07-30-2024 02:32 AM
Hello Databricks Community,
I am seeking guidance on handling full load scenarios with Delta Live Tables (DLT) and Unity Catalog. Here’s the situation I’m dealing with:
We have a data folder in Azure Data Lake Storage (ADLS) where we use Auto Loader to ingest the latest data into a Delta Live Table. The data is loaded as a full load from the source system on a daily basis. However, there are two key challenges:
Record Deletion: Records may be deleted in the source system, but these deletions are not reflected in our DLT tables because Auto Loader only ingests newly arriving files; it does not detect deletions.
Overwriting Data: Since the source system performs a full load daily and the source data has no primary key or unique identifier columns, we need to overwrite the existing data in the DLT table to accurately reflect the current state of the source.
Given that we are using Unity Catalog with our DLT tables, I would like to understand the best practices for implementing a full load strategy that allows us to overwrite the entire dataset in our Delta tables. Specifically, I am looking for guidance on:
- How to effectively overwrite data in a Delta Live Table when the source system performs full loads.
- Strategies to ensure that deleted records from the source system are also removed from the Delta table.
Any insights or examples on how to achieve this would be greatly appreciated.
Thank you!
Labels: Delta Lake
Accepted Solutions
08-14-2024 08:58 AM
To efficiently manage full data loads, we can leverage a regex pattern to dynamically identify the latest data folders within our bronze layer. These folders typically contain the most recent data updates for our tables. By using a Python script, we can pinpoint the correct folder path and integrate it with our Auto Loader function. This ensures that each pipeline run processes only the freshest data.
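To make this concrete, here is a minimal sketch of that folder-discovery approach, assuming the bronze folders are date-named (e.g. yyyy-MM-dd). The base path, pattern, table name, and file format below are placeholders, not the exact code from my pipeline:

```python
# Minimal sketch: pick the latest dated bronze folder and feed it to Auto Loader in DLT.
# BASE_PATH, the folder-name pattern, the table name, and the file format are assumptions.
import re
import dlt
from pyspark.sql import SparkSession

spark = SparkSession.getActiveSession()

BASE_PATH = "abfss://bronze@<storage-account>.dfs.core.windows.net/sales/"  # placeholder
DATE_FOLDER = re.compile(r"\d{4}-\d{2}-\d{2}")  # assumes yyyy-MM-dd folder names

def latest_folder(base_path: str) -> str:
    """Return the most recently dated subfolder under base_path."""
    # dbutils.fs.ls is the notebook built-in; if it is not available in your pipeline,
    # compute this path in a separate task and pass it in via pipeline configuration.
    entries = dbutils.fs.ls(base_path)
    dated = [e.path for e in entries if DATE_FOLDER.search(e.name)]
    return max(dated)  # lexicographic max works for yyyy-MM-dd folder names

@dlt.table(name="sales_full_load", comment="Latest full load from the source system")
def sales_full_load():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "parquet")  # adjust to your source file format
        .load(latest_folder(BASE_PATH))
    )
```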
With this setup, Delta Live Tables will conduct a full refresh of the tables every time the pipeline executes, writing the latest data to the DLT tables. To streamline this process, we will configure Databricks workflows to handle full refresh operations. These workflows will automate and schedule the full data loads, ensuring consistency and reliability. I’ve rigorously tested this approach, and it has proven effective across various scenarios.
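For the scheduling part, one option is a Workflows pipeline task with the "Full refresh" option enabled; the same thing can be triggered programmatically through the Delta Live Tables REST API. A minimal sketch, where the workspace URL, pipeline ID, and secret scope are placeholders for your own setup:

```python
# Minimal sketch: start a pipeline update with full_refresh=True so the DLT tables
# are truncated and rebuilt instead of appended to. All identifiers are placeholders.
import requests

WORKSPACE_URL = "https://<your-workspace>.azuredatabricks.net"  # placeholder
PIPELINE_ID = "<your-pipeline-id>"                              # placeholder
TOKEN = dbutils.secrets.get("my-scope", "databricks-token")     # hypothetical secret scope

resp = requests.post(
    f"{WORKSPACE_URL}/api/2.0/pipelines/{PIPELINE_ID}/updates",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"full_refresh": True},
)
resp.raise_for_status()
print("Started full-refresh update:", resp.json().get("update_id"))
```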
08-02-2024 05:29 AM
Hi @Retired_mod,
Could you please provide any sample code? It would be a great help in implementing full load data with Unity Catalog in DLT, as I'm having difficulty doing so using the provided link.
02-12-2025 12:35 PM
Hi @yvishal519, did you figure out how to do a full load? "spark.readStream" does incremental loads, and if I use "spark.read" it creates a materialized view.
What I want is to do a full load each time (no SCD types needed), and for the result to be a streaming table, not a materialized view.
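To illustrate what I mean, here is a minimal sketch of the two patterns (table names and path are placeholders): the readStream version gives a streaming table that only picks up new files between runs, while the read version gives a materialized view that is recomputed on every update. As I understand it, keeping the readStream definition and running the pipeline with a full refresh (as in the accepted answer) would truncate and rebuild the streaming table from the current source folder each run.

```python
# Minimal sketch contrasting the two DLT patterns discussed above. Placeholders only.
import dlt
from pyspark.sql import SparkSession

spark = SparkSession.getActiveSession()
SOURCE_PATH = "abfss://bronze@<storage-account>.dfs.core.windows.net/orders/latest/"  # placeholder

# spark.readStream -> streaming table: only new files are processed between full refreshes
@dlt.table(name="orders_streaming")
def orders_streaming():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "parquet")  # adjust to your source file format
        .load(SOURCE_PATH)
    )

# spark.read -> materialized view: fully recomputed on every pipeline update
@dlt.table(name="orders_mv")
def orders_mv():
    return spark.read.format("parquet").load(SOURCE_PATH)
```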
08-02-2024 11:53 AM
@Retired_mod I tried the above methods, but it is still appending records instead of overwriting them as a full load. Is there any other way to overwrite the existing DLT records with the latest data?

