Hi @NathanSundarara, regarding your current approach, here are some potential solutions and considerations:
- Deduplication: Implement deduplication strategies within your DLT pipeline. For example:
```python
clicksDedupDf = (
    spark.readStream.table("LIVE.rawClicks")
    # Tolerate events arriving up to 5 seconds late, then drop rows that repeat
    # the same (userId, clickAdId) pair within that watermark window
    .withWatermark("clickTimestamp", "5 seconds")
    .dropDuplicatesWithinWatermark(["userId", "clickAdId"])
)
```
- SCD Type 2: If you need to maintain historical changes, consider implementing Slowly Changing Dimension Type 2 (SCD Type 2) logic in your DLT pipeline; a small sketch follows below.
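A minimal sketch of what this could look like with DLT's `apply_changes` API, assuming a change feed named `rawClicksCdc` keyed by `userId`/`clickAdId` and ordered by `clickTimestamp` (all table and column names here are placeholders for your own):

```python
import dlt

# Target streaming table that will hold the full change history
dlt.create_streaming_table("clicksHistory")

dlt.apply_changes(
    target="clicksHistory",
    source="rawClicksCdc",          # hypothetical upstream change feed in the same pipeline
    keys=["userId", "clickAdId"],   # business key used to match rows
    sequence_by="clickTimestamp",   # ordering column for late / out-of-order events
    stored_as_scd_type=2            # keep history with __START_AT / __END_AT columns
)
```

With `stored_as_scd_type=2`, DLT closes out the previous version of a row and inserts a new one instead of overwriting it, which gives you the historical trail without hand-written merge logic.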
Some possible optimizations for performance:
- Incremental Processing: Ensure your DLT pipeline is configured for incremental processing where possible, e.g. by reading sources as streaming tables so each update only processes new data rather than recomputing everything.
- Partitioning: Partition your data on the timestamp column you're using for updates (or a date derived from it) to improve query and merge performance; a small example follows this list.
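For instance, a date-based partition derived from the update timestamp could look like this (table and column names are illustrative, not from your pipeline):

```python
import dlt
from pyspark.sql import functions as F

@dlt.table(
    name="clicksByDate",
    partition_cols=["clickDate"]   # partition on the date derived from the update timestamp
)
def clicks_by_date():
    return (
        spark.readStream.table("LIVE.rawClicks")
        .withColumn("clickDate", F.to_date("clickTimestamp"))
    )
```

Partitioning on a derived date column (rather than the raw timestamp) keeps partition counts manageable while still pruning effectively on time-based filters.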
Please let me know if you'd like to discuss any of the above points further.