Data Engineering

Forum Posts

Teja07
by New Contributor II
  • 1737 Views
  • 4 replies
  • 0 kudos

Resolved! Datatype mismatch while reading data from sql server to databricks

Data from Azure SQL Server was read into Databricks through a JDBC connection (Spark version 2.x) and stored in Gen1. Now the client wants to migrate the data from Gen1 to Gen2. When we ran the same jobs that read data from Azure SQL Server to Databr...

Latest Reply
Anonymous
Not applicable
  • 0 kudos

Hi @Mani Teja G​, thank you for your question! To assist you better, please take a moment to review the answer and let me know if it best fits your needs. Please help us select the best solution by clicking "Select As Best" if it does. Your feedback ...

3 More Replies
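Since the thread above is truncated, here is a minimal sketch of one common fix for this kind of mismatch: pinning the column types on the JDBC read with customSchema so every run, whether it lands in Gen1 or Gen2, produces the same schema. The server URL, table, and column names below are hypothetical.

df = (spark.read
    .format("jdbc")
    .option("url", "jdbc:sqlserver://myserver.database.windows.net:1433;database=mydb")  # hypothetical server
    .option("dbtable", "dbo.sales")  # hypothetical table
    .option("user", "username")
    .option("password", "password")
    # Pin the types instead of relying on inference, so the Gen2 output
    # matches what the earlier Gen1 jobs produced.
    .option("customSchema", "id BIGINT, amount DECIMAL(38,6), doc_date TIMESTAMP")
    .load())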
RamyaN
by New Contributor II
  • 1905 Views
  • 2 replies
  • 3 kudos

How to read enum[] (array of enum) datatype from Postgres using Spark

We are trying to read a column that is an array-of-enum datatype from Postgres as a string datatype in the target. We were able to achieve this by explicitly using the concat function while extracting, like below: val jdbcDF3 = spark.read .format("jdbc") .option(...

Latest Reply
Hubert-Dudek
Esteemed Contributor III
  • 3 kudos

You can try a custom schema for the JDBC read: .option("customSchema", "colname STRING")

1 More Replies
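A minimal sketch of the customSchema suggestion above, in PySpark; the connection details and the status_history column name are made up for illustration.

df = (spark.read
    .format("jdbc")
    .option("url", "jdbc:postgresql://host:5432/mydb")  # hypothetical
    .option("dbtable", "public.orders")                 # hypothetical
    .option("user", "username")
    .option("password", "password")
    # Map the enum[] column to a plain string rather than letting the
    # connector fail on the unsupported Postgres type.
    .option("customSchema", "status_history STRING")
    .load())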
georgian2133
by New Contributor
  • 935 Views
  • 0 replies
  • 0 kudos

Getting error [DATATYPE_MISMATCH.BINARY_OP_DIFF_TYPES]

[DATATYPE_MISMATCH.BINARY_OP_DIFF_TYPES] Cannot resolve "(DocDate AND orderedhl)" due to data type mismatch: the left and right operands of the binary operator have incompatible types ("STRING" and "DECIMAL(38,6)"); line 67, pos 0. (Query excerpt: 66. group by 67. or...)

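The post has no replies, but this error generally means a boolean operator (here AND) received non-boolean operands, often because a comparison went missing. A hedged sketch with a hypothetical table and columns:

# Fails as in the post: AND gets a STRING and a DECIMAL operand.
# spark.sql("SELECT * FROM sales WHERE DocDate AND orderedhl")

# Works: give each side of AND an actual boolean comparison.
spark.sql("""
    SELECT DocDate, SUM(orderedhl) AS total_hl
    FROM sales                                   -- hypothetical table
    WHERE DocDate >= '2023-01-01' AND orderedhl > 0
    GROUP BY DocDate
""").show()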
Adalberto
by New Contributor II
  • 2750 Views
  • 4 replies
  • 2 kudos

Resolved! cannot resolve '(CAST(10000 AS BIGINT) div Khe)' due to data type mismatch:

Hi, I'm trying to create a Delta table using SQL but I'm getting this error: Error in SQL statement: AnalysisException: cannot resolve '(CAST(10000 AS BIGINT) div Khe)' due to data type mismatch: differing types in '(CAST(10000 AS BIGINT) div Khe)' (big...

Latest Reply
Noopur_Nigam
Valued Contributor II
  • 2 kudos

Hi @Adalberto Garcia Espinosa​, do you need the Khe column to be double? If not, the query below works: %sql CREATE OR REPLACE TABLE Productos(Khe bigint NOT NULL, Fctor_HL_Estiba bigint GENERATED ALWAYS AS (cast(10000 as bigint) div Khe)) seems to be work...

3 More Replies
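For readers hitting the same error, a sketch of the accepted fix: div expects integral operands on both sides, so declare Khe as BIGINT (as in the reply) rather than double, or cast it inside the expression. This assumes a Databricks SQL context where Delta is the default table format.

spark.sql("""
    CREATE OR REPLACE TABLE Productos (
        Khe BIGINT NOT NULL,
        -- Both operands of `div` are now BIGINT, so the types line up.
        Fctor_HL_Estiba BIGINT GENERATED ALWAYS AS (CAST(10000 AS BIGINT) div Khe)
    )
""")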
lizou
by Contributor II
  • 2513 Views
  • 1 reply
  • 1 kudos

Never use the float data type

select float('92233464567.33') returns 92,233,466,000. I expected the result to be around 92,233,464,567.xx; therefore, the float data type should be avoided. Using double or decimal works as expected. But I see the float data type widely used, assuming most num...

Latest Reply
Prabakar
Esteemed Contributor III
  • 1 kudos

Float is an approximate-number data type, which means that not all values in the data type range can be represented exactly. Decimal/Numeric is a fixed-precision data type, which means that all the values in the data type range can be represented exactly w...

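A quick way to reproduce the precision loss described in this thread, using the value from the post:

spark.sql("""
    SELECT CAST('92233464567.33' AS FLOAT)          AS as_float,    -- ~9.2233466E10, digits lost
           CAST('92233464567.33' AS DOUBLE)         AS as_double,   -- keeps ~15-16 significant digits
           CAST('92233464567.33' AS DECIMAL(20, 2)) AS as_decimal   -- exact: 92233464567.33
""").show(truncate=False)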
Raie
by New Contributor III
  • 3586 Views
  • 3 replies
  • 4 kudos

Resolved! How do I specify column's data type with spark dataframes?

What I am doing:
spark_df = spark.createDataFrame(dfnew)
spark_df.write.saveAsTable("default.test_table", index=False, header=True)
This automatically detects the datatypes and is working right now. But what if the datatype cannot be detected or detect...

Latest Reply
Hubert-Dudek
Esteemed Contributor III
  • 4 kudos

Just create the table earlier and set the column types (CREATE TABLE ... LOCATION (path)). In the DataFrame you need to have corresponding data types, which you can produce with cast syntax; your syntax is just incorrect. Here is an example of the correct syntax: from p...

2 More Replies
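A minimal sketch of the approach from the thread in PySpark: pass an explicit schema to createDataFrame instead of relying on inference, then save. The column names are hypothetical, and the index/header arguments from the question are pandas options that saveAsTable does not need.

from pyspark.sql.types import StructType, StructField, IntegerType, StringType, DoubleType

schema = StructType([
    StructField("id", IntegerType(), False),
    StructField("name", StringType(), True),
    StructField("amount", DoubleType(), True),
])
# dfnew is the pandas DataFrame from the question.
spark_df = spark.createDataFrame(dfnew, schema=schema)
spark_df.write.saveAsTable("default.test_table")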
kelleyrw
by New Contributor II
  • 6932 Views
  • 7 replies
  • 0 kudos

Resolved! How do I register a UDF that returns an array of tuples in scala/spark?

I'm relatively new to Scala. In the past, I was able to do the following in Python: def foo(p1, p2): import datetime as dt dt.datetime(2014, 4, 17, 12, 34) result = [ (1, "1", 1.1, dt.datetime(2014, 4, 17, 1, 0)), (2, "2", 2...

Latest Reply
__max
New Contributor III
  • 0 kudos

Hello, just in case, here is an example of the proposed solution above: import org.apache.spark.sql.functions._ import org.apache.spark.sql.expressions._ import org.apache.spark.sql.types._ val data = Seq(("A", Seq((3,4),(5,6),(7,10))), ("B", Seq((-1,...

6 More Replies
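The accepted answer is Scala (truncated above); as a hedged companion, here is the analogous pattern in PySpark: a UDF that returns an array of structs, since Spark SQL has no tuple type. The field names are made up.

from pyspark.sql.functions import udf
from pyspark.sql.types import (ArrayType, StructType, StructField,
                               IntegerType, StringType, DoubleType, TimestampType)
import datetime as dt

row_type = StructType([
    StructField("i", IntegerType()),
    StructField("s", StringType()),
    StructField("d", DoubleType()),
    StructField("ts", TimestampType()),
])

@udf(returnType=ArrayType(row_type))
def foo():
    # Each Python tuple maps onto one struct in the returned array.
    return [(1, "1", 1.1, dt.datetime(2014, 4, 17, 1, 0)),
            (2, "2", 2.2, dt.datetime(2014, 4, 17, 2, 0))]

spark.range(1).select(foo().alias("rows")).show(truncate=False)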