Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

DLT bronze tables

Faisal
Contributor

I am ingesting incremental parquet files into a bronze streaming table. As a general best practice, how much historical data should be retained in the bronze layer, given that I will only use bronze to ingest source data and move it to silver streaming tables with APPLY_CHANGES_INTO?

1 REPLY 1

MuthuLakshmi
New Contributor III

The amount of historical data to retain in the bronze layer depends on your specific use case and requirements. As a general best practice, retain enough history to support your downstream analytics and machine learning workloads, while weighing the cost and performance implications of storing and processing large amounts of data.

One approach to managing historical data in the bronze layer is to use partitioning and time-based data retention policies. For example, you can partition your data by date or time, and then use a retention policy to automatically delete or archive old partitions after a certain period of time. This can help you manage the size of your data lake and reduce storage costs, while still retaining enough historical data to support your use cases.
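The time-based retention idea above can be sketched in plain Python. This is a hypothetical helper (`partitions_to_expire` is not a Databricks or DLT API) that, given a set of date partition values and a retention window, returns the partitions old enough to delete or archive:

```python
from datetime import date, timedelta

def partitions_to_expire(partition_dates, retention_days, today=None):
    """Return the partition dates that fall outside the retention window.

    partition_dates: iterable of datetime.date partition values
    retention_days:  keep partitions newer than this many days
    today:           reference date (defaults to the current date)
    """
    today = today or date.today()
    cutoff = today - timedelta(days=retention_days)
    # Partitions strictly older than the cutoff are candidates for
    # deletion or archival; newer ones are kept.
    return sorted(d for d in partition_dates if d < cutoff)

# Example: with a 90-day retention policy evaluated on 2024-06-30,
# the cutoff is 2024-04-01, so the January and March partitions expire.
parts = [date(2024, 1, 1), date(2024, 3, 1), date(2024, 6, 1)]
old = partitions_to_expire(parts, retention_days=90, today=date(2024, 6, 30))
```

In a Delta-based bronze table, the expired partitions identified this way would typically be removed with a `DELETE` on the partition column followed by `VACUUM` to reclaim storage, though the exact mechanism depends on your table layout and compliance requirements.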
