Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Failed to merge incompatible data types LongType and StringType

tassiodahora
New Contributor III

Guys, good morning!

I am writing the results of a JSON file to a Delta table, but the JSON structure is not always the same; when a field is missing from the JSON, I get a type incompatibility when I append:

(dfbrzagend.write
  .format("delta")
  .mode("append")
  .option("inferSchema", "true")
  .option("path", brzpath)
  .option("schema", defaultschema)
  .saveAsTable(brzbdtable))

Failed to merge fields 'age_responsavelnotafiscalpallet' and 'age_responsavelnotafiscalpallet'. Failed to merge incompatible data types LongType and StringType

1 ACCEPTED SOLUTION


Anonymous
Not applicable

Hi @Tássio Santos​ 

The Delta table performs schema validation on every column: the source DataFrame's column data types must match the column data types in the target table. If they don't match, an exception is raised.

For reference:

https://docs.databricks.com/delta/delta-batch.html#schema-validation-1

You can cast the column explicitly before writing it to the target table to avoid this issue.


3 REPLIES


Kaniz_Fatma
Community Manager

Hi @Tássio Santos​, we haven’t heard from you on the last response from @Chetan Kardekar​, and I was checking back to see if you have a resolution yet. If you found a solution, please share it with the community, as it can be helpful to others. Otherwise, we will respond with more details and try to help.

ifun
New Contributor II

The following example shows changing a column type:

(spark.read.table(...)
  .withColumn("birthDate", col("birthDate").cast("date"))
  .write
  .mode("overwrite")
  .option("overwriteSchema", "true")
  .saveAsTable(...)
)

For details, see https://docs.databricks.com/delta/update-schema.html
