Data Engineering

How to prevent duplicate entries from entering a Delta Lake table in Azure Storage

User16826994223
Honored Contributor III

I have a DataFrame stored in Delta format in ADLS. When I try to append new, updated rows to that Delta table, duplicates appear. Is there any way I can delete the old existing record in Delta and add the new, updated record?

The schema of the DataFrame stored in Delta has a unique column, by which we can check whether a record is updated or new.

2 REPLIES

Ryan_Chynoweth
Esteemed Contributor

You should use a MERGE command on this table to match records on the unique column. Delta Lake does not enforce primary keys, so if you only append, duplicate IDs will appear.

MERGE will provide the functionality you desire.

https://docs.databricks.com/spark/latest/spark-sql/language-manual/delta-merge-into.html
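For illustration, here is a minimal PySpark sketch of that upsert using the Delta Lake Python API. The ADLS path, the `updates_df` DataFrame holding the new rows, and `id` as the unique column are assumptions for the example, not details from the thread:

```python
from delta.tables import DeltaTable

# Hypothetical ADLS path of the existing Delta table (placeholder values).
target_path = "abfss://container@account.dfs.core.windows.net/tables/my_table"

# `updates_df` is assumed to be the DataFrame with new/updated rows;
# `id` is the assumed unique key column.
target = DeltaTable.forPath(spark, target_path)

(target.alias("t")
    .merge(updates_df.alias("s"), "t.id = s.id")
    .whenMatchedUpdateAll()       # replace the old record when the id already exists
    .whenNotMatchedInsertAll()    # insert records whose id is not in the table yet
    .execute())
```

With this pattern, re-running the job with the same `updates_df` overwrites matching rows instead of appending duplicates.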

According to the documentation, COPY INTO is supposed to be idempotent, and on successive runs, it shouldn't be reloading already loaded files. In my case, I created a table from existing data in S3 (many files). Then, hoping to load only newly arrived files (batch ingestion), I tried COPY INTO, but it went ahead and naively reloaded everything from S3.

I also tried MERGE, but it looks like the source can't be Parquet files in S3; does the source have to be a Delta table as well?
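For reference, a hedged sketch of the COPY INTO pattern described above, invoked through spark.sql (the table name and S3 landing path are placeholders, not from this thread); per the documentation it tracks ingested files and should skip them on successive runs:

```python
# Hypothetical table name and S3 path, for illustration only.
spark.sql("""
    COPY INTO my_table
    FROM 's3://my-bucket/landing/'
    FILEFORMAT = PARQUET
""")
```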
