DLT overwrite part of the table

erigaud
Honored Contributor

Hello!

We're currently building a file ingestion pipeline using Delta Live Tables and Auto Loader.

The bronze tables all have pretty much the following schema:

file_name | file_upload_date | colA | colB 

(Well, there are actually 250+ columns, but you get the idea.)

The bronze table is append-only, possibly with some duplicates, because a file can be uploaded several times with corrections but will keep the same name. The logic I'm trying to implement is the following:

- A file is loaded in bronze, let's say 500 rows with file_name = file_name_A.csv and the corresponding upload_date (that part is fine, just standard Auto Loader; see the sketch after this list).

- In silver we already have some rows (let's say 1000) for that file_name, but with an older upload_date. In that case we want to replace all 1000 rows with the newer 500 rows.
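
For reference, the bronze side is declared roughly like this. A minimal sketch, assuming a CSV landing location; the path, options, and table name are illustrative:

import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Append-only bronze table fed by Auto Loader")
def bronze_files():
    return (
        spark.readStream.format("cloudFiles")            # Auto Loader source
        .option("cloudFiles.format", "csv")
        .option("header", "true")
        .load("/Volumes/catalog/schema/landing/")        # illustrative landing path
        .withColumn("file_name", F.col("_metadata.file_name"))
        .withColumn("file_upload_date", F.current_timestamp())  # illustrative: processing time as upload date
    )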

How would someone go about doing something like this using Delta Live Tables?

Thank you!

 

5 REPLIES

Kaniz_Fatma
Community Manager

Hi @erigaud, you can use the MERGE INTO command to implement the logic you described with Delta Live Tables. MERGE INTO conditionally updates or inserts data into a Delta table, and it is particularly useful for upserts (i.e., "update or insert") and slowly changing dimensions.
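
A minimal sketch of that merge via the DeltaTable Python API, assuming file_name uniquely identifies the rows to upsert; silver_table and new_rows are illustrative names:

from delta.tables import DeltaTable

# new_rows: DataFrame holding the freshly ingested bronze batch
silver = DeltaTable.forName(spark, "silver_table")

(silver.alias("t")
    .merge(new_rows.alias("s"), "t.file_name = s.file_name")
    .whenMatchedUpdateAll()      # replace rows for files seen before
    .whenNotMatchedInsertAll()   # insert rows for brand-new files
    .execute())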

erigaud
Honored Contributor

Hello @Kaniz_Fatma. I am not sure MERGE INTO is the right solution for my problem, as I do not have a unique key in this situation: many rows with the same file_name would match many rows in my table.

Do you have a better solution?

Tharun-Kumar
Honored Contributor II

@erigaud 

You could get the distinct file names from the new set of records and delete all their entries from your silver table, then append the new records to it.
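
A minimal sketch of that delete-then-append pattern, reusing the same illustrative silver_table and new_rows names as above:

from delta.tables import DeltaTable
from pyspark.sql import functions as F

# Distinct file names present in the new batch
files = [r["file_name"] for r in new_rows.select("file_name").distinct().collect()]

# Drop the stale rows for those files, then append the fresh ones
silver = DeltaTable.forName(spark, "silver_table")
silver.delete(F.col("file_name").isin(files))
new_rows.write.format("delta").mode("append").saveAsTable("silver_table")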

erigaud
Honored Contributor

@Tharun-Kumar 

That's the solution I was thinking of, but is there a clean way to do that using DLT, or should I just use a regular notebook task and plain Delta tables?

Tharun-Kumar
Honored Contributor II
Accepted Solution

@erigaud 

Using jobs/workflows would be the right choice for this.
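
For the notebook task in that workflow, Delta's replaceWhere option is one way to atomically overwrite only the affected files' rows. A minimal sketch with the same illustrative names; replaceWhere requires every written row to satisfy the predicate, which holds here since new_rows contains exactly those file names:

files = [r["file_name"] for r in new_rows.select("file_name").distinct().collect()]
predicate = "file_name IN ({})".format(", ".join("'{}'".format(f) for f in files))

(new_rows.write.format("delta")
    .mode("overwrite")
    .option("replaceWhere", predicate)  # replaces only the rows matching the predicate
    .saveAsTable("silver_table"))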
