Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

How to Prevent Duplicate Entries from Being Written to a Delta Lake on Azure Storage

User16826994223
Honored Contributor III

I have a DataFrame stored in Delta format in ADLS. Now, when I try to append new, updated rows to that Delta table, is there any way I can delete the old existing record in Delta and add the new, updated record instead?

The schema of the DataFrame stored in Delta has a unique column, which we can use to check whether a record is updated or new.

2 REPLIES

Ryan_Chynoweth
Honored Contributor III

You should use a MERGE command on this table to match records on the unique column. Delta Lake does not enforce primary keys, so if you only append, duplicate IDs will appear.

MERGE will give you the functionality you want.

https://docs.databricks.com/spark/latest/spark-sql/language-manual/delta-merge-into.html
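For example, something along these lines with the PySpark Delta Lake API (a minimal sketch; the ADLS path, the `updates_df` DataFrame, and the `id` key column are placeholders for your own table and unique column):

```python
from delta.tables import DeltaTable

# Hypothetical ADLS path to your existing Delta table.
delta_path = "abfss://container@storageaccount.dfs.core.windows.net/path/to/delta-table"

# `updates_df` is the DataFrame holding your new/updated rows.
target = DeltaTable.forPath(spark, delta_path)

(target.alias("t")
    .merge(updates_df.alias("s"), "t.id = s.id")   # match on your unique column
    .whenMatchedUpdateAll()       # replace the existing record with the updated one
    .whenNotMatchedInsertAll()    # insert records that do not exist yet
    .execute())
```

`whenMatchedUpdateAll` overwrites the existing row for a matching key and `whenNotMatchedInsertAll` inserts brand-new keys, which gives you the delete-old/insert-new behavior without duplicates.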

According to the documentation, COPY INTO is supposed to be idempotent, and on successive runs, it shouldn't be reloading already loaded files. In my case, I created a table from existing data in S3 (many files). Then, hoping to load only newly arrived files (batch ingestion), I tried COPY INTO, but it went ahead and naively reloaded everything from S3.

I also tried MERGE, but it looks like the source can't be Parquet files in S3; does the source have to be a Delta table as well?
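For what it's worth, MERGE does not require the source to be a Delta table; any Spark DataFrame can serve as the source, so you can read the newly arrived Parquet files into a DataFrame first. A rough sketch, assuming hypothetical S3 paths and an `id` key column:

```python
from delta.tables import DeltaTable

# Read the newly arrived Parquet files as a plain DataFrame (hypothetical path).
source_df = spark.read.parquet("s3://my-bucket/landing/new-files/")

# Target Delta table created earlier from the existing data (hypothetical path).
target = DeltaTable.forPath(spark, "s3://my-bucket/delta/my-table")

(target.alias("t")
    .merge(source_df.alias("s"), "t.id = s.id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())
```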
