Lakehouse federation bringing data from SQL Server
11-12-2023 05:48 PM
Has anyone tried bringing in data using the newly announced Lakehouse Federation and ingesting it with Delta Live Tables? I'm currently testing with materialized views. I first loaded the full data, and now I load the last 3 days each day and recompute using materialized views. At the moment the materialized view does a full recompute. Some of the records may already exist in the current materialized view, so we use window functions to recompute and keep the last record per key based on its timestamp. I tried DLT with APPLY CHANGES, but it throws an error because the data changed, so I'm looking for options.
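For reference, a minimal sketch of the window-function deduplication described above, keeping only the most recent record per key. The catalog/table name (federated_catalog.dbo.orders) and the key and timestamp columns (id, updated_at) are placeholders, not the actual schema:

from pyspark.sql import functions as F
from pyspark.sql.window import Window

# Rank rows per business key by timestamp, newest first, and keep only the latest
w = Window.partitionBy("id").orderBy(F.col("updated_at").desc())

latest_df = (
    spark.read.table("federated_catalog.dbo.orders")  # foreign (federated) table; placeholder name
    .withColumn("rn", F.row_number().over(w))
    .filter("rn = 1")
    .drop("rn")
)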
- Labels: Delta Lake, Workflows
01-02-2025 02:13 AM
Hi @NathanSundarara, regarding your current approach, here are some potential solutions and considerations:
- Deduplication: Implement deduplication strategies within your DLT pipeline, for example:
# Read the raw clicks as a stream and drop duplicates that arrive within the
# watermark window, keyed on userId and clickAdId
clicksDedupDf = (
    spark.readStream.table("LIVE.rawClicks")
    .withWatermark("clickTimestamp", "5 seconds")
    .dropDuplicatesWithinWatermark(["userId", "clickAdId"])
)
- SCD Type 2: If you need to maintain historical changes, consider implementing Slowly Changing Dimension Type 2 (SCD Type 2) logic in your DLT pipeline (a sketch follows below).
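A minimal sketch of SCD Type 2 with the DLT Python API, assuming a CDC-style source view named source_changes with key column id and ordering column updated_at; all names are placeholders:

import dlt
from pyspark.sql.functions import col

# Target streaming table that will hold the SCD Type 2 history
dlt.create_streaming_table("orders_history")

dlt.apply_changes(
    target="orders_history",        # streaming table created above
    source="source_changes",        # placeholder CDC source view
    keys=["id"],                    # placeholder business key
    sequence_by=col("updated_at"),  # ordering column used to resolve change order
    stored_as_scd_type="2",         # keep full history (SCD Type 2)
)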
Some possible optimizations for performance:
- Incremental Processing: Ensure your DLT pipeline is configured for incremental processing where possible.
- Partitioning: Properly partition your data based on the timestamp column you're using for updates to improve query performance (see the sketch after this list).
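A rough sketch of the partitioning point, assuming a DLT table partitioned by a date derived from the update timestamp; the table and column names are placeholders:

import dlt
from pyspark.sql.functions import to_date, col

# Partition the target by the date of the update timestamp so daily
# recomputes only touch recent partitions
@dlt.table(
    name="orders_partitioned",
    partition_cols=["update_date"],
)
def orders_partitioned():
    return (
        spark.read.table("federated_catalog.dbo.orders")  # placeholder source
        .withColumn("update_date", to_date(col("updated_at")))
    )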
Please let me know if you'd like to discuss any of the above points further.

