07-02-2024 01:25 AM
We receive several datasets where the full dump is delivered daily or weekly. What is the best way to ingest these into Databricks using DLT or plain PySpark while adhering to the medallion architecture?
1. If we use Auto Loader into Bronze, we'd end up appending roughly 100,000 rows to the Bronze table every day (with 99% duplicates).
How would we then move only the changes or additions downstream?
07-02-2024 01:41 AM
In the case of full loads, you only need to pass
.mode('overwrite')
when writing to your Bronze table. This is not related to Auto Loader.
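For illustration, a minimal sketch of what that could look like in plain PySpark; the landing path, file format, and table name are placeholders for your setup:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical landing path and table name -- adjust to your environment
raw_path = "s3://my-bucket/raw/customers/"

# Read the latest full dump
df = spark.read.format("csv").option("header", "true").load(raw_path)

# Overwrite replaces the Bronze table contents instead of appending,
# so the daily full load never accumulates duplicates in Bronze
(df.write
   .format("delta")
   .mode("overwrite")
   .saveAsTable("bronze.customers"))
```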
07-02-2024 06:34 AM
Won't this cause trouble with CDC in the Silver layer, since the entire dataset is new on every load? Or will it remember which rows from Bronze it has already read, even though the table is overwritten?
07-02-2024 06:44 AM
It depends on your logic. If the source sends you a full load, that might mean you need to reprocess everything, in all downstream layers as well.
If the source only sends full loads because it isn't capable of identifying changes, then you should do CDC as early as possible. Usually a MERGE INTO with merge conditions and update conditions will help you (see the sketch below).
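A minimal sketch of such a MERGE INTO from Bronze into Silver; the table names, the customer_id key, and the row_hash change-detection column are all assumptions for illustration:

```python
# Assumes both tables carry a precomputed row_hash column for cheap
# change detection; swap in whatever columns identify your changes.
spark.sql("""
    MERGE INTO silver.customers AS target
    USING bronze.customers AS source
      ON target.customer_id = source.customer_id          -- merge condition
    WHEN MATCHED AND target.row_hash <> source.row_hash   -- update condition:
      THEN UPDATE SET *                                   -- only rewrite changed rows
    WHEN NOT MATCHED
      THEN INSERT *
""")
```

The update condition is what makes full loads tolerable: unchanged rows match on the key but fail the hash comparison, so they are left untouched instead of being rewritten on every run.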
07-03-2024 05:22 AM
Agree with @Witold to apply CDC as early as possible. Depending on where the initial files get deposited, I'd recommend adding an initial raw layer to your medallion architecture that is just your cloud storage account, so each day or week the files land there. From there you can pull them into your Bronze layer using MERGE INTO, so that only the new / latest data moves forward; a sketch of this pattern follows.
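One way this could look is Auto Loader picking files up from the raw landing zone and a foreachBatch MERGE into Bronze. The storage path, key column, and table names are placeholders, and the snippet assumes the `spark` session Databricks provides in notebooks:

```python
from delta.tables import DeltaTable

def merge_to_bronze(batch_df, batch_id):
    # Upsert each micro-batch of newly arrived files into Bronze,
    # so only new or changed rows land there. The customer_id key
    # is an assumption -- use your dataset's business key.
    bronze = DeltaTable.forName(spark, "bronze.customers")
    (bronze.alias("t")
           .merge(batch_df.alias("s"), "t.customer_id = s.customer_id")
           .whenMatchedUpdateAll()
           .whenNotMatchedInsertAll()
           .execute())

(spark.readStream
      .format("cloudFiles")                    # Auto Loader
      .option("cloudFiles.format", "json")     # file format is an assumption
      .load("abfss://raw@myaccount.dfs.core.windows.net/customers/")  # raw landing zone
      .writeStream
      .foreachBatch(merge_to_bronze)
      .option("checkpointLocation", "/chk/bronze_customers")
      .trigger(availableNow=True)              # process new files, then stop
      .start())
```

With `availableNow=True` this runs as a scheduled batch job: Auto Loader's checkpoint tracks which raw files were already ingested, and the MERGE keeps Bronze deduplicated even though each delivery is a full dump.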