Data Engineering
Enhancement Request: DLT: Infer Schema Logic/Merge Logic

Chris_sh
New Contributor II

Currently, when a DLT pipeline runs and sees only NULL values in a column, it infers that the column is a string by default. On the next run of that table, numeric values arrive and the column is inferred as numeric. DLT then tries to merge the two versions of the column and errors out because the column schemas cannot be merged.

I understand why the column was set to string: it had to be set to something, and a NULL value could indicate any type of field. This makes sense. The desired behavior is that when the DLT pipeline runs and sees the schema mismatch, it checks the columns: if the persisted column contains only null values and the incoming column has real values, the persisted column should be dropped automatically and recreated with the schema of the new column.

Without this behavior, pipelines can break every time new data arrives for a column that was originally all null.
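For reference, a minimal sketch of the kind of pipeline that hits this (the path, table name, and JSON ingestion via Auto Loader are illustrative only, not my exact setup):

import dlt

# Illustrative only: paths and names are placeholders.
# First run: every record has "amount": null, so the column is inferred as string.
# A later run brings numeric values, the column is inferred as numeric,
# and the update fails because the two column schemas cannot be merged.
@dlt.table
def raw_events():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .option("cloudFiles.inferColumnTypes", "true")
        .load("/mnt/landing/events/")
    )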

1 REPLY

Kaniz
Community Manager

Hi @Chris_sh, handling schema mismatches in DLT pipelines is crucial for smooth data processing.

 

Let's explore some strategies to address this issue:

 

Schema Evolution and Delta Lake:

  • Delta Lake, the open-source storage layer that backs Databricks tables on top of Apache Spark™, provides features for handling schema evolution. It allows you to manage changes to the schema over time without breaking existing pipelines.
  • When new data arrives with a different schema, Delta Lake can automatically handle schema evolution by merging the new schema with the existing one.
  • To enable schema migration using DataFrameWriter or DataStreamWriter, set the option .option("mergeSchema", "true").
  • For other operations, set the session configuration spark.databricks.delta.schema.autoMerge.enabled to true (see the sketch below).
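For example, a minimal sketch (the DataFrame df and the target path are placeholders, not part of this thread):

# Per-write schema merge when appending to an existing Delta table.
(df.write
    .format("delta")
    .option("mergeSchema", "true")
    .mode("append")
    .save("/mnt/delta/events"))

# Session-wide automatic schema merging, for operations that do not go
# through DataFrameWriter or DataStreamWriter.
spark.conf.set("spark.databricks.delta.schema.autoMerge.enabled", "true")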

Handling Schema Mismatch in DLT Pipelines:

  • Schema mismatches can also arise when you consume Protobuf-serialized messages and decode them with the Spark from_protobuf function.
  • To handle this gracefully, consider the following approaches:
    • PERMISSIVE Mode: Configure the from_protobuf function to use PERMISSIVE mode so that corrupted Protobuf messages are decoded as nulls instead of failing the query (see the sketch after this list).
    • Custom Deserialization: Read the payload as raw bytes (for example, with a ByteArrayDeserializer) and convert it yourself, which gives you a chance to handle schema mismatches explicitly.
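A minimal sketch of the PERMISSIVE approach (the Kafka source, message name, and descriptor file path below are placeholders, not details from this thread):

from pyspark.sql.protobuf.functions import from_protobuf

# Read the raw bytes from Kafka (source details are assumptions).
raw = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load())

# Decode with PERMISSIVE mode: malformed messages become null structs
# instead of failing the stream.
decoded = raw.select(
    from_protobuf(
        raw.value,
        "Event",                                  # protobuf message name (assumed)
        descFilePath="/dbfs/schemas/event.desc",  # compiled descriptor file (assumed)
        options={"mode": "PERMISSIVE"}
    ).alias("event")
)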

Testing and Isolation:

  • Test schema changes against a non-production copy of the pipeline first, so a mismatch surfaces in isolation rather than breaking the production run.

Remember that handling schema evolution is essential for maintaining robust data pipelines. 
