Data Engineering

How to Prevent Duplicate Entries from Entering a Delta Lake in Azure Storage

User16826994223
Honored Contributor III

I have a DataFrame stored in Delta format in ADLS. Now, when I try to append new updated rows to that Delta table, it should update the existing records. Is there any way I can delete the old existing record in Delta and add the new updated record?

The DataFrame schema stored in Delta has a unique column, by which we can check whether a record is updated or new.
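For context, this is roughly what I am doing today. It is only a sketch: the ADLS path and the "id"/"payload" columns are placeholders, not my real schema.

# Minimal sketch of the current approach: a plain append to the Delta path in ADLS.
# The path and column names ("id", "payload") are placeholders for the real schema.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

delta_path = "abfss://container@account.dfs.core.windows.net/tables/my_table"

updated_rows = spark.createDataFrame(
    [(1, "new value"), (42, "brand new record")],
    ["id", "payload"],
)

# A plain append never checks the unique "id" column, so rows whose id already
# exists in the Delta table end up duplicated.
updated_rows.write.format("delta").mode("append").save(delta_path)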

2 REPLIES

Ryan_Chynoweth
Honored Contributor III

You should use a MERGE command on this table to match records on the unique column. Delta Lake does not enforce primary keys, so if you only append, duplicate IDs will appear.

MERGE will provide the functionality you desire.

https://docs.databricks.com/spark/latest/spark-sql/language-manual/delta-merge-into.html
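Something along these lines with the Delta Lake Python API should do it. This is just a sketch, assuming your unique column is called "id" and the table lives at an ADLS path; adjust both to your setup.

# A minimal upsert sketch with the Delta Lake Python API. The path and the
# "id"/"payload" columns are assumptions standing in for the real table.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

delta_path = "abfss://container@account.dfs.core.windows.net/tables/my_table"
target = DeltaTable.forPath(spark, delta_path)

updates = spark.createDataFrame(
    [(1, "new value"), (42, "brand new record")],
    ["id", "payload"],
)

(
    target.alias("t")
    .merge(updates.alias("s"), "t.id = s.id")
    .whenMatchedUpdateAll()     # replace the old record when the id already exists
    .whenNotMatchedInsertAll()  # insert the record when the id is new
    .execute()
)

whenMatchedUpdateAll overwrites the existing row when the id matches, and whenNotMatchedInsertAll adds the row when it is new, so you never end up with two rows for the same id.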

According to the documentation, COPY INTO is supposed to be idempotent, and on successive runs, it shouldn't be reloading already loaded files. In my case, I created a table from existing data in S3 (many files). Then, hoping to load only newly arrived files (batch ingestion), I tried COPY INTO, but it went ahead and naively reloaded everything from S3.

I also tried MERGE, but it looks like the source can't be Parquet files in S3; does the source have to be a Delta table as well?
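Would something like this be the intended pattern, reading the newly arrived Parquet files into a DataFrame first and using that DataFrame as the MERGE source? The bucket paths and the "id" key below are only illustrative.

# A hedged sketch: the MERGE source does not have to be a Delta table; any
# DataFrame works, including one read from Parquet files in S3.
# Paths and the "id" join key are placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

new_files = spark.read.parquet("s3://my-bucket/landing/new_batch/")
target = DeltaTable.forPath(spark, "s3://my-bucket/delta/my_table/")

(
    target.alias("t")
    .merge(new_files.alias("s"), "t.id = s.id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)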
