Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Delta Live Tables use case

New Contributor II

Hi all,

We have the following use case and wondering if DLT is the correct approach.

Landing area with daily dumps of parquet files into our Data Lake container.

The daily dump does a full overwrite of the parquet each time, keeping the same file name.

The idea would be to re-process the whole parquet each time and manage the delta in the bronze table with SCD 2.

Suggestions on the best approach would be helpful.



Community Manager

Hi @Floody, let’s explore how Delta Lake can be a suitable foundation for your use case.

  1. Delta Lake Overview:

    • Delta Lake is an open source storage layer that brings ACID transactions to Apache Spark™ and big data workloads. It provides reliability, performance, and scalability for data lakes.
    • It’s built on top of existing data lakes and is compatible with Parquet, ORC, and other data formats.
    • Key features include ACID transactions, schema enforcement, time travel, and data versioning.
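As a small illustration of the time travel and versioning features, in Databricks SQL (the table name `events` is hypothetical):

```sql
-- Query an earlier version of a Delta table by version number
SELECT * FROM events VERSION AS OF 5;

-- Or query the table as of a point in time
SELECT * FROM events TIMESTAMP AS OF '2024-06-01';

-- Inspect the transaction log history that makes this possible
DESCRIBE HISTORY events;
```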
  2. Your Use Case:

    • You have a daily dump of Parquet files into your data lake container.
    • The daily dump overwrites the Parquet files each time, but the file names remain the same.
  3. Approaches to Consider:

    • CLONE Parquet: create a Delta table as a clone of the Parquet source; re-running the clone incrementally syncs new or changed files and preserves the existing partitioning structure.
    • CONVERT TO DELTA: convert the Parquet directory in place into a Delta table; a straightforward one-time conversion without incremental support.
  4. Recommendation:

    • Considering your scenario, CLONE Parquet or CONVERT TO DELTA could be suitable.
    • If you need incremental support and want to maintain the existing partitioning structure, go for CLONE Parquet.
    • If you prefer a straightforward conversion without incremental support, choose CONVERT TO DELTA.
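A rough sketch of the two options in Databricks SQL (the path and table names below are placeholders; adapt them to your container layout):

```sql
-- Option A: one-time, in-place conversion of the Parquet directory
-- to Delta (no incremental sync afterwards)
CONVERT TO DELTA parquet.`/mnt/datalake/landing/daily_dump`;

-- Option B: incremental clone -- re-running this statement syncs
-- new or changed Parquet data into the Delta target while keeping
-- the existing partitioning
CREATE OR REPLACE TABLE bronze.daily_dump
CLONE parquet.`/mnt/datalake/landing/daily_dump`;
```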

Remember to adapt the approach based on your specific requirements and workload characteristics. Delta Lake provides flexibility and robustness for managing data in your data lake container. 🌟
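For the SCD Type 2 requirement from the original question, Delta Live Tables can manage the history tracking with APPLY CHANGES. A minimal sketch in DLT SQL, assuming a hypothetical landing path, business key `id`, and a load-timestamp column added at ingestion; because the daily dump overwrites the same file name, the Auto Loader option `cloudFiles.allowOverwrites` is needed so the rewritten file is picked up again:

```sql
-- Ingest the daily Parquet dump; allowOverwrites re-reads the
-- overwritten file even though its name is unchanged
CREATE OR REFRESH STREAMING LIVE TABLE landing_raw
AS SELECT *, current_timestamp() AS load_ts
FROM cloud_files(
  '/mnt/datalake/landing/',
  'parquet',
  map('cloudFiles.allowOverwrites', 'true')
);

-- Target bronze table with SCD Type 2 history
CREATE OR REFRESH STREAMING LIVE TABLE bronze_scd2;

APPLY CHANGES INTO live.bronze_scd2
FROM stream(live.landing_raw)
KEYS (id)
SEQUENCE BY load_ts
STORED AS SCD TYPE 2;
```

With SCD Type 2, DLT adds `__START_AT` and `__END_AT` columns to the target so each re-processed row closes out the previous version rather than overwriting it.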

For more detailed implementation steps, refer to the Azure Databricks documentation.
