<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Lakehouse federation bringing data from SQL Server in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/lakehouse-federation-bringing-data-from-sql-server/m-p/103879#M41586</link>
    <description>&lt;P&gt;Hi &lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/60745"&gt;@NathanSundarara&lt;/a&gt;, regarding your current approach, here are some potential solutions and considerations:&lt;BR /&gt;- &lt;STRONG&gt;Deduplication&lt;/STRONG&gt;: Implement deduplication strategies within your DLT pipeline. For example:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;clicksDedupDf = (
  spark.readStream.table("LIVE.rawClicks")
  # Accept events arriving up to 5 seconds behind the max clickTimestamp seen
  .withWatermark("clickTimestamp", "5 seconds")
  # Drop rows with the same (userId, clickAdId) that arrive within the watermark
  .dropDuplicatesWithinWatermark(["userId", "clickAdId"])
)&lt;/LI-CODE&gt;
&lt;P&gt;- &lt;STRONG&gt;SCD Type 2&lt;/STRONG&gt;: If you need to maintain historical changes, consider implementing Slowly Changing Dimension Type 2 (SCD Type 2) logic in your DLT pipeline.&lt;/P&gt;
&lt;P&gt;Some possible optimizations for performance:&lt;/P&gt;
&lt;P&gt;- &lt;STRONG&gt;Incremental Processing&lt;/STRONG&gt;: Ensure your DLT pipeline is configured for incremental processing where possible.&lt;/P&gt;
&lt;P&gt;- &lt;STRONG&gt;Partitioning&lt;/STRONG&gt;: Partition your data on the timestamp column you're using for updates to improve query performance.&lt;/P&gt;
&lt;P&gt;Please let me know if you'd like to discuss any of the above points further.&lt;/P&gt;</description>
    <pubDate>Thu, 02 Jan 2025 10:13:13 GMT</pubDate>
    <dc:creator>Nam_Nguyen</dc:creator>
    <dc:date>2025-01-02T10:13:13Z</dc:date>
    <item>
      <title>Lakehouse federation bringing data from SQL Server</title>
      <link>https://community.databricks.com/t5/data-engineering/lakehouse-federation-bringing-data-from-sql-server/m-p/51018#M28937</link>
      <description>&lt;P&gt;Has anyone tried bringing in data using the newly announced Lakehouse Federation and ingesting it with DELTA LIVE TABLES? I'm currently testing with materialized views: we first loaded the full data, and now load the last 3 days daily and recompute using materialized views. At the moment the materialized view does a full recompute. Since some of the records may already exist in the current materialized view, we use window functions to recompute and keep the last record based on timestamp. We tried DLT with Apply Changes, but it errors out because the data changed, so we're looking for options.&lt;/P&gt;</description>
      <pubDate>Mon, 13 Nov 2023 01:48:23 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/lakehouse-federation-bringing-data-from-sql-server/m-p/51018#M28937</guid>
      <dc:creator>NathanSundarara</dc:creator>
      <dc:date>2023-11-13T01:48:23Z</dc:date>
    </item>
    <item>
      <title>Re: Lakehouse federation bringing data from SQL Server</title>
      <link>https://community.databricks.com/t5/data-engineering/lakehouse-federation-bringing-data-from-sql-server/m-p/103879#M41586</link>
      <description>&lt;P&gt;Hi &lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/60745"&gt;@NathanSundarara&lt;/a&gt;, regarding your current approach, here are some potential solutions and considerations:&lt;BR /&gt;- &lt;STRONG&gt;Deduplication&lt;/STRONG&gt;: Implement deduplication strategies within your DLT pipeline. For example:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;clicksDedupDf = (
  spark.readStream.table("LIVE.rawClicks")
  # Accept events arriving up to 5 seconds behind the max clickTimestamp seen
  .withWatermark("clickTimestamp", "5 seconds")
  # Drop rows with the same (userId, clickAdId) that arrive within the watermark
  .dropDuplicatesWithinWatermark(["userId", "clickAdId"])
)&lt;/LI-CODE&gt;
&lt;P&gt;- &lt;STRONG&gt;SCD Type 2&lt;/STRONG&gt;: If you need to maintain historical changes, consider implementing Slowly Changing Dimension Type 2 (SCD Type 2) logic in your DLT pipeline.&lt;/P&gt;
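&lt;P&gt;In DLT, the managed route for this is APPLY CHANGES with stored_as_scd_type = 2. To make the underlying logic concrete, here is a minimal plain-Python sketch of an SCD Type 2 upsert (the DimRow and scd2_upsert names are hypothetical, not a Databricks API): each key keeps a history of versions, the current version has an open valid_to, and a changed value closes the old row and opens a new one.&lt;/P&gt;

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DimRow:
    # Hypothetical dimension-row shape for illustration only
    key: str
    value: str
    valid_from: int                  # timestamp/sequence when this version became active
    valid_to: Optional[int] = None   # None means "current version"

def scd2_upsert(history: List[DimRow], key: str, value: str, ts: int) -> List[DimRow]:
    """Apply one change in SCD Type 2 style: close the current row for `key`
    if its value changed, then append the new version."""
    current = next((r for r in history if r.key == key and r.valid_to is None), None)
    if current is not None:
        if current.value == value:
            return history           # no change: keep history as-is
        current.valid_to = ts        # close the outgoing version
    history.append(DimRow(key, value, valid_from=ts))
    return history
```

For example, upserting ("42", "NY") at ts=1 and then ("42", "CA") at ts=5 leaves two rows: the NY version valid from 1 to 5, and the CA version open-ended.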
&lt;P&gt;Some possible optimizations for performance:&lt;/P&gt;
&lt;P&gt;- &lt;STRONG&gt;Incremental Processing&lt;/STRONG&gt;: Ensure your DLT pipeline is configured for incremental processing where possible.&lt;/P&gt;
&lt;P&gt;- &lt;STRONG&gt;Partitioning&lt;/STRONG&gt;: Partition your data on the timestamp column you're using for updates to improve query performance.&lt;/P&gt;
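&lt;P&gt;The incremental-processing point above boils down to a high-watermark filter: remember the largest timestamp already processed and only pick up newer rows on the next run. DLT streaming tables track this checkpoint for you; the helper below (incremental_batch is a hypothetical name, not a Databricks API) sketches the idea in plain Python.&lt;/P&gt;

```python
def incremental_batch(rows, last_watermark):
    """Return only rows newer than the stored high-watermark, plus the
    advanced watermark to persist for the next run.

    `rows` is an iterable of (timestamp, payload) tuples;
    `last_watermark` is the largest timestamp already processed (0 if none).
    """
    # Keep only rows strictly newer than what we processed last time
    fresh = [(ts, payload) for ts, payload in rows if ts > last_watermark]
    # Advance the watermark; if nothing is new, it stays where it was
    new_watermark = max((ts for ts, _ in fresh), default=last_watermark)
    return fresh, new_watermark
```

Running it twice over the same source with the persisted watermark yields each row exactly once, which is the property the daily "last 3 days" reload is approximating.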
&lt;P&gt;Please let me know if you'd like to discuss any of the above points further.&lt;/P&gt;</description>
      <pubDate>Thu, 02 Jan 2025 10:13:13 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/lakehouse-federation-bringing-data-from-sql-server/m-p/103879#M41586</guid>
      <dc:creator>Nam_Nguyen</dc:creator>
      <dc:date>2025-01-02T10:13:13Z</dc:date>
    </item>
  </channel>
</rss>

