10-05-2023 01:19 AM
Hello!
We're currently building a file ingestion pipeline using Delta Live Tables and Auto Loader.
The bronze tables pretty much have the following schema:
file_name | file_upload_date | colA | colB
(Well, there are actually 250+ columns but you get the idea)
The bronze table is append-only, possibly with some duplicates, because a file can be uploaded several times with corrections but will keep the same name. The logic I'm trying to implement is the following:
- A file is loaded into bronze, let's say 500 rows with file_name = file_name_A.csv and the corresponding upload_date (that part is fine, just standard Auto Loader; a rough sketch is included after this list).
- In silver we already had some rows (let's say 1000) for that file_name, but with an older upload_date. In that case we want to replace all 1000 rows with the newer 500 rows.
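For context, the bronze step looks roughly like this. This is only a sketch: the landing path is a placeholder, and I'm assuming the file name and upload date can be pulled from the _metadata column that file sources expose.

```python
import dlt
from pyspark.sql import functions as F

@dlt.table
def bronze():
    # Standard Auto Loader ingestion; path and options are placeholders.
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "csv")
        .load("/mnt/landing/")  # hypothetical landing location
        .withColumn("file_name", F.col("_metadata.file_name"))
        .withColumn("file_upload_date", F.col("_metadata.file_modification_time"))
    )
```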
How would someone go about doing something like this using Delta Live Tables?
Thank you!
10-05-2023 12:16 PM
Hello @Retired_mod. I'm not sure MERGE INTO is the right solution to my problem, as I don't have a unique key in this situation: many source rows with the same file_name would match many rows in my table.
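For illustration, a merge keyed only on file_name would look something like the sketch below (table names are made up), and as far as I understand Delta rejects it as soon as several source rows try to modify the same target row:

```python
# Sketch only; "silver" and "bronze_updates" are illustrative names.
spark.sql("""
    MERGE INTO silver AS t
    USING bronze_updates AS s
    ON t.file_name = s.file_name
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")
# With many rows per file_name on both sides, Delta aborts with a
# "multiple source rows matched and attempted to modify the same
# target row" error, so a plain MERGE doesn't fit here.
```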
Do you have a better solution?
10-05-2023 09:36 PM
You could get the distinct file names from the new set of records, delete all of their entries from your silver table, and then append the new records to silver, along the lines of the sketch below.
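A rough sketch of that with structured streaming and plain Delta tables, assuming it runs outside DLT (table names and the checkpoint path are placeholders):

```python
from delta.tables import DeltaTable
from pyspark.sql import functions as F

def replace_file_rows(batch_df, batch_id):
    # Distinct file names present in this micro-batch.
    files = [r["file_name"] for r in
             batch_df.select("file_name").distinct().collect()]
    if files:
        # Drop every existing silver row for those files...
        silver = DeltaTable.forName(batch_df.sparkSession, "silver")
        silver.delete(F.col("file_name").isin(files))
        # ...then append the fresh rows. Note that the delete and the
        # append are two separate transactions, so this is not atomic.
        batch_df.write.format("delta").mode("append").saveAsTable("silver")

(spark.readStream.table("bronze")
      .writeStream
      .foreachBatch(replace_file_rows)
      .option("checkpointLocation", "/mnt/checkpoints/silver")  # placeholder
      .start())
```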
10-05-2023 10:57 PM
That's the solution I was thinking of, but is there a clean way to do that using DLT, or should I just use a regular notebook task and plain Delta tables?