
Failed to merge incompatible data types LongType and StringType

tassiodahora
New Contributor III

Guys, good morning!

I am writing the results of a JSON feed to a Delta table, but the JSON structure is not always the same: when a field is missing from the JSON, its inferred type can differ, and that causes a type incompatibility when I append:

(dfbrzagend.write
  .format("delta")
  .mode("append")
  .option("inferSchema", "true")
  .option("path", brzpath)
  .option("schema", defaultschema)
  .saveAsTable(brzbdtable))

Failed to merge fields 'age_responsavelnotafiscalpallet' and 'age_responsavelnotafiscalpallet'. Failed to merge incompatible data types LongType and StringType

1 ACCEPTED SOLUTION


Anonymous
Not applicable

Hi @Tássio Santos​ 

The delta table performs schema validation of every column, and the source dataframe column data types must match the column data types in the target table. If they don’t match, an exception is raised.

For reference: https://docs.databricks.com/delta/delta-batch.html#schema-validation-1

You can avoid this by explicitly casting the column to the expected type before writing it to the target table.


3 REPLIES 3


Kaniz
Community Manager

Hi @Tássio Santos​ , We haven’t heard from you on the last response from @Chetan Kardekar​ , and I was checking back to see if you have a resolution yet. If you have any solution, please share it with the community as it can be helpful to others. Otherwise, we will respond with more details and try to help.

ifun
New Contributor II

The following example shows changing a column type:

from pyspark.sql.functions import col

(spark.read.table(...)
  .withColumn("birthDate", col("birthDate").cast("date"))
  .write
  .mode("overwrite")
  .option("overwriteSchema", "true")
  .saveAsTable(...)
)

For details, see https://docs.databricks.com/delta/update-schema.html
