07-30-2024 02:32 AM
Hello Databricks Community,
I am seeking guidance on handling full load scenarios with Delta Live Tables (DLT) and Unity Catalog. Here’s the situation I’m dealing with:
We have a data folder in Azure Data Lake Storage (ADLS) where we use Auto Loader to ingest the latest data into a Delta Live Table. The data is loaded as a full load from the source system on a daily basis. However, there are two key challenges:
Record Deletion: Records may be deleted in the source system, but these deletions are not reflected in our DLT tables because Auto Loader only ingests newly arrived files and does not capture deletes.
Overwriting Data: Since the source system performs a full load daily and there are no primary key columns or unique identifiers in the source data, we need to overwrite the existing data in the DLT table to accurately reflect the current state of the source.
Given that we are using Unity Catalog with our DLT tables, I would like to understand the best practices for implementing a full load strategy that allows us to overwrite the entire dataset in our Delta tables.
Any insights or examples on how to achieve this would be greatly appreciated.
Thank you!
08-14-2024 08:58 AM
To efficiently manage full data loads, we can leverage a regex pattern to dynamically identify the latest data folders within our bronze layer. These folders typically contain the most recent data updates for our tables. By using a Python script, we can pinpoint the correct folder path and integrate it with our Auto Loader function. This ensures that each pipeline run processes only the freshest data.
With this setup, Delta Live Tables will conduct a full refresh of the tables every time the pipeline executes, writing the latest data to the DLT tables. To streamline this process, we will configure Databricks workflows to handle full refresh operations. These workflows will automate and schedule the full data loads, ensuring consistency and reliability. I’ve rigorously tested this approach, and it has proven effective across various scenarios.
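For reference, here is a minimal sketch of that pattern, assuming the bronze layer lands each daily full load in an ISO-dated sub-folder (e.g. .../2024-08-14/). The path, table name, and file format below are placeholders, not the exact names from my pipeline:

```python
import re

import dlt
from pyspark.sql import functions as F

# Placeholder path -- replace with your actual ADLS bronze location.
BRONZE_ROOT = "abfss://bronze@<storage_account>.dfs.core.windows.net/sales/"


def latest_load_folder(root_path: str) -> str:
    """Return the most recent dated sub-folder (e.g. .../2024-08-14/),
    assuming folders are named with ISO dates."""
    date_pattern = re.compile(r"\d{4}-\d{2}-\d{2}")
    # dbutils is provided by the Databricks runtime in a pipeline notebook.
    dated = [f.path for f in dbutils.fs.ls(root_path) if date_pattern.search(f.name)]
    # Lexicographic max picks the latest date because the names are ISO-formatted.
    return max(dated)


@dlt.table(
    name="sales_full_load",  # hypothetical table name
    comment="Latest daily full load from the source system",
)
def sales_full_load():
    latest_path = latest_load_folder(BRONZE_ROOT)
    return (
        spark.readStream.format("cloudFiles")          # Auto Loader
        .option("cloudFiles.format", "parquet")        # adjust to your source file format
        .load(latest_path)
        .withColumn("_load_path", F.lit(latest_path))  # record which folder was ingested
    )
```

In the Databricks workflow that runs the pipeline, enable the full refresh option on the pipeline task so each scheduled run truncates and rebuilds the table from the latest folder instead of appending to the streaming source.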
07-31-2024 09:09 PM
Hi @yvishal519,
You can use overwrite mode in your write operations. This will ensure that the existing data in the Delta table is replaced with the new data from the source. Alternatively, you can use a MERGE statement to synchronize the Delta table with the source data. This approach ensures that any records deleted in the source system are also removed from the Delta table. If the source schema can evolve, add .option("mergeSchema", "true") to the write.
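As a rough illustration of the overwrite route, here is a sketch for a plain Delta table written outside the DLT pipeline (tables managed by a DLT pipeline can only be written by that pipeline). The storage path and the three-level Unity Catalog table name are hypothetical:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical locations -- substitute your own.
SOURCE_PATH = "abfss://bronze@<storage_account>.dfs.core.windows.net/sales/2024-08-14/"
TARGET_TABLE = "main.analytics.sales"  # three-level Unity Catalog name

# Read the latest full load from the landing folder.
full_load_df = spark.read.format("parquet").load(SOURCE_PATH)

# Overwrite the whole table so records deleted at the source disappear here too.
(
    full_load_df.write
    .format("delta")
    .mode("overwrite")
    .option("mergeSchema", "true")  # tolerate new columns if the source schema evolves
    .saveAsTable(TARGET_TABLE)
)
```

Since the source data has no unique identifiers, a MERGE would have no reliable join keys to match on, so a full overwrite like the one above is the simpler option in this scenario.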
08-02-2024 05:00 AM
Hi @yvishal519, Thank you for reaching out to our community! We're here to help you.
To ensure we provide you with the best support, could you please take a moment to review the responses and choose the one that best answers your question? Your feedback not only helps us assist you better but also benefits other community members who may have similar questions in the future.
If you found the answer helpful, consider giving it a kudo. If the response fully addresses your question, please mark it as the accepted solution. This will help us close the thread and ensure your question is resolved.
We appreciate your participation and are here to assist you further if you need it!
08-02-2024 05:29 AM
Hi @Kaniz_Fatma,
Could you please provide some sample code? It would be a great help in implementing a full data load with Unity Catalog in DLT, as I'm having difficulty doing so using the provided link.
08-02-2024 11:53 AM
@Kaniz_Fatma I tried the above methods, but it is still appending records rather than overwriting them as a full load. Is there any other way to overwrite the existing DLT records with the latest data?