<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Question on best method to deliver Azure SQL Server data into Databricks Bronze and Silver. in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/question-on-best-method-to-deliver-azure-sql-server-data-into/m-p/127505#M47989</link>
<description>&lt;P&gt;Hello&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/136982"&gt;@Nick_Pacey&lt;/a&gt;,&lt;BR /&gt;&lt;BR /&gt;Databricks has recently introduced Lakeflow Connect, and it supports ingesting data from SQL Server. I have done some small experiments, which all went well, but not at scale. It looks like a very promising option. Note that it is still in public preview.&lt;BR /&gt;&lt;BR /&gt;You can refer to the documentation here:&amp;nbsp;&lt;A href="https://docs.databricks.com/aws/en/ingestion/lakeflow-connect/" target="_blank"&gt;Managed connectors in Lakeflow Connect | Databricks Documentation&lt;/A&gt;&lt;BR /&gt;And if you want a more detailed doc for SQL Server, you can check this article:&lt;BR /&gt;&lt;A href="https://community.databricks.com/t5/technical-blog/efficient-data-ingestion-from-sql-server-with-lakeflow-connect/ba-p/122597" target="_blank"&gt;Efficient Data Ingestion from SQL Server with Lake... - Databricks Community - 122597&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Hope that helps, and I would definitely give Lakeflow Connect a shot.&lt;BR /&gt;&lt;BR /&gt;Best, Ilir&lt;/P&gt;</description>
    <pubDate>Tue, 05 Aug 2025 21:14:25 GMT</pubDate>
    <dc:creator>ilir_nuredini</dc:creator>
    <dc:date>2025-08-05T21:14:25Z</dc:date>
    <item>
      <title>Question on best method to deliver Azure SQL Server data into Databricks Bronze and Silver.</title>
      <link>https://community.databricks.com/t5/data-engineering/question-on-best-method-to-deliver-azure-sql-server-data-into/m-p/127191#M47885</link>
<description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;We have an Azure SQL Server (replicating from an on-prem SQL Server) whose data is required in Databricks bronze and beyond.&lt;/P&gt;&lt;P&gt;This database has hundreds of tables that are all required. Table sizes vary from very small up to 100 million+ rows for the biggest, and change volume on the biggest tables can be 10,000 rows per hour.&lt;/P&gt;&lt;P&gt;So far, we've been using Lakehouse Federation and materialised view generation via DLT pipelines to deliver SQL data into Databricks, but this scale of change is bigger. We don't believe we can use incremental updates with this method (the source doesn't have row tracking available), so we would have to bring a full load of data into the mat view on every refresh. Is this correct?&lt;/P&gt;&lt;P&gt;We're also looking again at native SQL CDC options. This still seems to have the same limitations as when we last looked, i.e. you have to set it up for every table (as above, we have over 500 tables), and schema drift takes a fair bit of code and management.&lt;/P&gt;&lt;P&gt;We'd welcome thoughts and the latest ideas on the best way to handle this from the Databricks end. Do you think our usual method will cope okay with this scale? Are we missing something on MV incremental loads or CDC?&lt;/P&gt;&lt;P&gt;As always, thanks in advance!&lt;/P&gt;&lt;P&gt;Nick&lt;/P&gt;</description>
      <pubDate>Fri, 01 Aug 2025 15:30:51 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/question-on-best-method-to-deliver-azure-sql-server-data-into/m-p/127191#M47885</guid>
      <dc:creator>Nick_Pacey</dc:creator>
      <dc:date>2025-08-01T15:30:51Z</dc:date>
    </item>
    <item>
      <title>Re: Question on best method to deliver Azure SQL Server data into Databricks Bronze and Silver.</title>
      <link>https://community.databricks.com/t5/data-engineering/question-on-best-method-to-deliver-azure-sql-server-data-into/m-p/127505#M47989</link>
<description>&lt;P&gt;Hello&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/136982"&gt;@Nick_Pacey&lt;/a&gt;,&lt;BR /&gt;&lt;BR /&gt;Databricks has recently introduced Lakeflow Connect, and it supports ingesting data from SQL Server. I have done some small experiments, which all went well, but not at scale. It looks like a very promising option. Note that it is still in public preview.&lt;BR /&gt;&lt;BR /&gt;You can refer to the documentation here:&amp;nbsp;&lt;A href="https://docs.databricks.com/aws/en/ingestion/lakeflow-connect/" target="_blank"&gt;Managed connectors in Lakeflow Connect | Databricks Documentation&lt;/A&gt;&lt;BR /&gt;And if you want a more detailed doc for SQL Server, you can check this article:&lt;BR /&gt;&lt;A href="https://community.databricks.com/t5/technical-blog/efficient-data-ingestion-from-sql-server-with-lakeflow-connect/ba-p/122597" target="_blank"&gt;Efficient Data Ingestion from SQL Server with Lake... - Databricks Community - 122597&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Hope that helps, and I would definitely give Lakeflow Connect a shot.&lt;BR /&gt;&lt;BR /&gt;Best, Ilir&lt;/P&gt;</description>
      <pubDate>Tue, 05 Aug 2025 21:14:25 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/question-on-best-method-to-deliver-azure-sql-server-data-into/m-p/127505#M47989</guid>
      <dc:creator>ilir_nuredini</dc:creator>
      <dc:date>2025-08-05T21:14:25Z</dc:date>
    </item>
    <item>
      <title>Re: Question on best method to deliver Azure SQL Server data into Databricks Bronze and Silver.</title>
      <link>https://community.databricks.com/t5/data-engineering/question-on-best-method-to-deliver-azure-sql-server-data-into/m-p/127515#M47995</link>
<description>&lt;P&gt;Hey Nick,&lt;/P&gt;&lt;P&gt;Have you tried the SQL Server connector in Lakeflow Connect? It should provide a native connection to your SQL Server, potentially allowing for incremental updates and CDC setup.&lt;/P&gt;&lt;P&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/databricks/ingestion/lakeflow-connect/sql-server-pipeline" target="_blank"&gt;https://learn.microsoft.com/en-us/azure/databricks/ingestion/lakeflow-connect/sql-server-pipeline&lt;/A&gt;&lt;/P&gt;&lt;P&gt;I haven’t tried this connector before, but it seems like a good first thing to try for your case.&lt;/P&gt;&lt;P&gt;Kerem Durak&lt;/P&gt;</description>
      <pubDate>Tue, 05 Aug 2025 23:08:50 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/question-on-best-method-to-deliver-azure-sql-server-data-into/m-p/127515#M47995</guid>
      <dc:creator>kerem</dc:creator>
      <dc:date>2025-08-05T23:08:50Z</dc:date>
    </item>
  </channel>
</rss>

