<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Lakeflow Connect: Data Ingestion from SQL Server to Databricks in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/lakeflow-connect-data-ingestion-from-sql-server-to-databricks/m-p/155731#M54299</link>
    <description>&lt;P&gt;Hello&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/149095"&gt;@shan-databricks&lt;/a&gt;!&lt;/P&gt;&lt;P&gt;One additional point: I would also validate the expected load with the SQL Server DBA, because even though Lakeflow manages the parallelism internally, the source SQL Server still has to handle those concurrent reads. For 100 tables, I would start with one pipeline/gateway, monitor extraction duration and SQL Server load, and only split into multiple pipelines/gateways if there is a clear operational need such as different refresh SLAs, very large tables, or source-side throttling. Also keep in mind that for tables with primary keys, Change Tracking (CT) is generally preferred over Change Data Capture (CDC) because it adds less overhead on the source.&lt;/P&gt;</description>
    <pubDate>Tue, 28 Apr 2026 18:12:04 GMT</pubDate>
    <dc:creator>amirabedhiafi</dc:creator>
    <dc:date>2026-04-28T18:12:04Z</dc:date>
    <item>
      <title>Lakeflow Connect: Data Ingestion from SQL Server to Databricks</title>
      <link>https://community.databricks.com/t5/data-engineering/lakeflow-connect-data-ingestion-from-sql-server-to-databricks/m-p/155656#M54289</link>
      <description>&lt;P&gt;&lt;SPAN&gt;We have a use case to ingest data from SQL Server into Databricks using Lakeflow Connect. There are 100 tables, and on a daily basis we will perform inserts, updates, and deletes based on CDC data. For this requirement, how can we enable multiple parallel connections to the SQL Server database?&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 28 Apr 2026 10:02:43 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/lakeflow-connect-data-ingestion-from-sql-server-to-databricks/m-p/155656#M54289</guid>
      <dc:creator>shan-databricks</dc:creator>
      <dc:date>2026-04-28T10:02:43Z</dc:date>
    </item>
    <item>
      <title>Re: Lakeflow Connect: Data Ingestion from SQL Server to Databricks</title>
      <link>https://community.databricks.com/t5/data-engineering/lakeflow-connect-data-ingestion-from-sql-server-to-databricks/m-p/155689#M54295</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/149095"&gt;@shan-databricks&lt;/a&gt;,&lt;/P&gt;
&lt;P&gt;Databricks recommends up to ~250 tables per pipeline, so 100 is well within guidance. Lakeflow Connect doesn’t offer a user-facing control for multiple parallel connections. Instead, configure a single SQL Server gateway with sufficient cores. Databricks automatically manages the parallel JDBC connections from the gateway to your SQL Server.&lt;/P&gt;
&lt;P&gt;When you give the gateway enough cores (via its compute policy/node sizes), Databricks can scale extraction in parallel: the platform opens and manages multiple JDBC connections internally.&lt;/P&gt;
&lt;P class="p1"&gt;&lt;FONT size="2" color="#FF6600"&gt;&lt;STRONG&gt;&lt;I&gt;If this answer resolves your question, could you mark it as “Accept as Solution”? That helps other users quickly find the correct fix.&lt;/I&gt;&lt;/STRONG&gt;&lt;/FONT&gt;&lt;I&gt;&lt;/I&gt;&lt;/P&gt;
</description>
      <pubDate>Tue, 28 Apr 2026 13:54:43 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/lakeflow-connect-data-ingestion-from-sql-server-to-databricks/m-p/155689#M54295</guid>
      <dc:creator>Ashwin_DSA</dc:creator>
      <dc:date>2026-04-28T13:54:43Z</dc:date>
    </item>
    <item>
      <title>Re: Lakeflow Connect: Data Ingestion from SQL Server to Databricks</title>
      <link>https://community.databricks.com/t5/data-engineering/lakeflow-connect-data-ingestion-from-sql-server-to-databricks/m-p/155731#M54299</link>
      <description>&lt;P&gt;Hello&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/149095"&gt;@shan-databricks&lt;/a&gt;!&lt;/P&gt;&lt;P&gt;One additional point: I would also validate the expected load with the SQL Server DBA, because even though Lakeflow manages the parallelism internally, the source SQL Server still has to handle those concurrent reads. For 100 tables, I would start with one pipeline/gateway, monitor extraction duration and SQL Server load, and only split into multiple pipelines/gateways if there is a clear operational need such as different refresh SLAs, very large tables, or source-side throttling. Also keep in mind that for tables with primary keys, Change Tracking (CT) is generally preferred over Change Data Capture (CDC) because it adds less overhead on the source.&lt;/P&gt;</description>
      <pubDate>Tue, 28 Apr 2026 18:12:04 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/lakeflow-connect-data-ingestion-from-sql-server-to-databricks/m-p/155731#M54299</guid>
      <dc:creator>amirabedhiafi</dc:creator>
      <dc:date>2026-04-28T18:12:04Z</dc:date>
    </item>
  </channel>
</rss>

