Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

DLT Deduping Best Practice in Medallion

ChristianRRL
Valued Contributor

Hi there, I have what may be a deceptively simple question, but I suspect it has a variety of answers:

  • What is the "right" place to handle deduplication in the medallion architecture?

In my example, everything is already laid out properly: data arrives in a `landing` location, and I have a DLT job that loops through all of the respective source CSV > target Delta tables. At the moment, the raw CSVs land entirely in a bronze Delta table (DLT streaming) with no deduplication whatsoever, so if the same data is sent via two differently timestamped CSVs, *all* of it shows up in bronze.

My current intent is to have all the raw data arrive in bronze, and then dedupe it into a second silver Delta table (DLT streaming).

Does this make sense? I'm curious whether others handle it the same way, or whether it's more common practice to handle deduplication in the bronze table instead.

2 REPLIES

cgrant
Databricks Employee

A typical recommendation is to avoid doing any transformations as data lands in the bronze layer (ELT). The idea is that your bronze layer should be as close a representation of your source data as possible: if something goes wrong downstream, or if (as in your example) the existence of duplicates is itself a useful signal, it's nice to have an accurate system of record to fall back on.

So in your example, raw data lands in bronze as-is and is deduplicated in the silver layer. These are not hard-and-fast rules, though; they are up to your practice.
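In DLT, the split described above (bronze as-is, silver deduplicated) can be expressed as two streaming tables. A minimal sketch, assuming Auto Loader for CSV ingestion; the table names, landing path, and `event_id`/`event_ts` columns are placeholders for your own schema, and this only runs inside a DLT pipeline:

```python
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw CSVs exactly as landed; no transformations (system of record).")
def bronze_events():
    return (
        spark.readStream.format("cloudFiles")     # Auto Loader; `spark` is provided by DLT
        .option("cloudFiles.format", "csv")
        .option("header", "true")
        .load("/landing/events/")                 # placeholder landing path
    )

@dlt.table(comment="Bronze with duplicates removed on the business key.")
def silver_events():
    return (
        dlt.read_stream("bronze_events")
        .withColumn("event_ts", F.to_timestamp("event_ts"))  # CSVs land as strings
        .withWatermark("event_ts", "1 day")       # bounds the streaming dedup state
        .dropDuplicates(["event_id", "event_ts"]) # placeholder key columns
    )
```

The watermark matters for streaming deduplication: without it, Spark must remember every key it has ever seen, so state grows without bound.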

Sidhant07
Databricks Employee

1. In the medallion architecture, deduplication can be handled in either the bronze or the silver layer.
2. If you want to keep a complete history of all raw data, including duplicates, in the bronze layer, handle deduplication in the silver layer.
3. If you don't need a complete history of raw data in bronze, handle deduplication in the bronze layer to reduce the amount of data processed in the silver layer.
4. Consider data volume, performance, cost, and data-quality requirements when deciding where to deduplicate.
5. In your use case, handling deduplication in the silver layer is valid, but consider moving it to the bronze layer if the silver layer ends up processing a large amount of duplicate data.
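Regardless of which layer you choose, the core operation is the same: keep one row per business key, usually the latest one. A minimal plain-Python sketch of that logic; the `id`/`ingest_ts` record shape is hypothetical, and in Spark this step would be `dropDuplicates` or a windowed `row_number()` over the business key:

```python
def dedupe_latest(records, key="id", order_by="ingest_ts"):
    """Keep one record per key, preferring the largest order_by value."""
    latest = {}
    for rec in records:
        k = rec[key]
        if k not in latest or rec[order_by] > latest[k][order_by]:
            latest[k] = rec
    return list(latest.values())

# Two differently timestamped copies of the same row, as in the question:
bronze = [
    {"id": 1, "value": "a", "ingest_ts": "2024-01-01T00:00"},
    {"id": 1, "value": "a", "ingest_ts": "2024-01-02T00:00"},  # duplicate resend
    {"id": 2, "value": "b", "ingest_ts": "2024-01-01T00:00"},
]
silver = dedupe_latest(bronze)  # one row per id; the later resend wins for id=1
```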
