<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Lakebridge reconciliation code keeps running continuously without Spark jobs or errors - Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/lakebridge-reconciliation-code-keeps-running-continuously/m-p/156462#M54426</link>
    <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;I am facing an issue while running the Lakebridge reconciliation code in Databricks using TriggerReconService.trigger_recon().&lt;/P&gt;&lt;P&gt;The code keeps running continuously without any output, error, or logs. Also, no Spark jobs are getting triggered or shown in the Spark UI.&lt;/P&gt;&lt;P&gt;I also added exception handling using ReconciliationException and generic Exception, but no exception is being thrown. The process seems to hang during the TriggerReconService.trigger_recon() execution itself.&lt;/P&gt;&lt;P&gt;The cluster is running properly and the notebook is attached correctly, so it looks like the process is getting stuck before Spark execution starts.&lt;/P&gt;&lt;P&gt;Has anyone faced a similar issue specifically during the Lakebridge reconciliation execution?&lt;/P&gt;&lt;P&gt;Any debugging suggestions or checks would be really helpful.&lt;/P&gt;&lt;P&gt;Thanks!&lt;/P&gt;</description>
    <pubDate>Fri, 08 May 2026 15:46:46 GMT</pubDate>
    <dc:creator>Akshay_Petkar</dc:creator>
    <dc:date>2026-05-08T15:46:46Z</dc:date>
    <item>
      <title>Lakebridge reconciliation code keeps running continuously without Spark jobs or errors</title>
      <link>https://community.databricks.com/t5/data-engineering/lakebridge-reconciliation-code-keeps-running-continuously/m-p/156462#M54426</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;I am facing an issue while running the Lakebridge reconciliation code in Databricks using TriggerReconService.trigger_recon().&lt;/P&gt;&lt;P&gt;The code keeps running continuously without any output, error, or logs. Also, no Spark jobs are getting triggered or shown in the Spark UI.&lt;/P&gt;&lt;P&gt;I also added exception handling using ReconciliationException and generic Exception, but no exception is being thrown. The process seems to hang during the TriggerReconService.trigger_recon() execution itself.&lt;/P&gt;&lt;P&gt;The cluster is running properly and the notebook is attached correctly, so it looks like the process is getting stuck before Spark execution starts.&lt;/P&gt;&lt;P&gt;Has anyone faced a similar issue specifically during the Lakebridge reconciliation execution?&lt;/P&gt;&lt;P&gt;Any debugging suggestions or checks would be really helpful.&lt;/P&gt;&lt;P&gt;Thanks!&lt;/P&gt;</description>
      <pubDate>Fri, 08 May 2026 15:46:46 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/lakebridge-reconciliation-code-keeps-running-continuously/m-p/156462#M54426</guid>
      <dc:creator>Akshay_Petkar</dc:creator>
      <dc:date>2026-05-08T15:46:46Z</dc:date>
    </item>
    <item>
      <title>Re: Lakebridge reconciliation code keeps running continuously without Spark jobs or errors</title>
      <link>https://community.databricks.com/t5/data-engineering/lakebridge-reconciliation-code-keeps-running-continuously/m-p/156512#M54440</link>
      <description>&lt;P&gt;Hello!&lt;/P&gt;&lt;P&gt;I hit something similar, and it turned out to be an initialization issue, not a reconciliation performance issue.&lt;/P&gt;&lt;P&gt;Why? Because Lakebridge reconciliation should eventually execute Spark actions when it fetches schemas or data and writes reconciliation metadata. The flow runs TriggerReconService.trigger_recon(...) from the notebook, and Lakebridge stores the reconciliation output in its metadata catalog/schema after the run.&lt;/P&gt;&lt;P&gt;First, check the package version:&lt;/P&gt;&lt;PRE&gt;import databricks.labs.lakebridge as lb
print(lb.__version__)&lt;/PRE&gt;&lt;P&gt;The latest version on PyPI is 0.12.2, released Feb 26, 2026 (&lt;A href="https://pypi.org/project/databricks-labs-lakebridge" target="_blank"&gt;https://pypi.org/project/databricks-labs-lakebridge&lt;/A&gt;). This matters because recent releases improved reconciliation exception handling and logging, and even serverless compatibility (&lt;A href="https://github.com/databrickslabs/remorph/releases" target="_blank"&gt;https://github.com/databrickslabs/remorph/releases&lt;/A&gt;).&lt;/P&gt;&lt;P&gt;In my case I enabled debug logging before calling trigger_recon(), tested basic Spark and the workspace client separately, and found where it was hanging (so the issue was not Lakebridge reconciliation itself).&lt;/P&gt;&lt;P&gt;Lakebridge needs permissions to create or use its reconciliation metadata objects, such as USE CATALOG and CREATE SCHEMA, and, if you use an existing schema, volume permissions.&lt;/P&gt;&lt;P&gt;What I also discovered at the time is that Lakebridge uses UC volumes for intermediate persistence on serverless clusters, and the volume must exist with write permissions.&lt;/P&gt;</description>
      <pubDate>Sun, 10 May 2026 15:45:55 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/lakebridge-reconciliation-code-keeps-running-continuously/m-p/156512#M54440</guid>
      <dc:creator>amirabedhiafi</dc:creator>
      <dc:date>2026-05-10T15:45:55Z</dc:date>
    </item>
    <item>
      <title>Re: Lakebridge reconciliation code keeps running continuously without Spark jobs or errors</title>
      <link>https://community.databricks.com/t5/data-engineering/lakebridge-reconciliation-code-keeps-running-continuously/m-p/156529#M54443</link>
      <description>&lt;P&gt;Thanks! We are now able to see the debug logs and the code is progressing further than before.&lt;/P&gt;&lt;P&gt;Schema fetch is working successfully for both Synapse and Databricks. However, the Spark jobs are getting stuck with:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;P&gt;0 rows read&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;0 bytes read&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;0 bytes written&lt;/P&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;The notebook cell keeps running continuously without completing.&lt;/P&gt;&lt;P&gt;We are using:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;P&gt;an all-purpose cluster&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;report_type = row&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;and the table contains only around 1000 rows&lt;/P&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;Has anyone seen Spark jobs remain in this state during Lakebridge reconciliation?&lt;/P&gt;&lt;P&gt;Any suggestions on possible causes or debugging steps would be very helpful.&lt;/P&gt;&lt;P&gt;Thanks!&lt;/P&gt;</description>
      <pubDate>Mon, 11 May 2026 07:36:48 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/lakebridge-reconciliation-code-keeps-running-continuously/m-p/156529#M54443</guid>
      <dc:creator>Akshay_Petkar</dc:creator>
      <dc:date>2026-05-11T07:36:48Z</dc:date>
    </item>
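A Spark JDBC read that sits at 0 rows / 0 bytes often means the session opens but the data path is blocked (firewall, private endpoint, routing). A minimal, hypothetical first check from the driver is plain TCP reachability of the Synapse SQL endpoint; the host below is a placeholder:

```python
import socket

def can_reach(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder check: Synapse SQL endpoints listen on 1433. Replace
# "localhost" with your workspace's SQL endpoint host.
print(can_reach("localhost", 1433))  # True only if something listens on 1433
```

If this returns False from the cluster, the hang is network-level (NSG/firewall rules between the Databricks workers and Synapse), not a Lakebridge problem.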
    <item>
      <title>Re: Lakebridge reconciliation code keeps running continuously without Spark jobs or errors</title>
      <link>https://community.databricks.com/t5/data-engineering/lakebridge-reconciliation-code-keeps-running-continuously/m-p/156549#M54446</link>
      <description>&lt;P&gt;We also tried multiple configurations including:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;P&gt;report_type = row&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;report_type = all&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;report_type = schema&lt;/P&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;But in all cases, the Spark job eventually gets stuck at some point while the cell keeps running continuously without completing.&lt;/P&gt;</description>
      <pubDate>Mon, 11 May 2026 09:18:27 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/lakebridge-reconciliation-code-keeps-running-continuously/m-p/156549#M54446</guid>
      <dc:creator>Akshay_Petkar</dc:creator>
      <dc:date>2026-05-11T09:18:27Z</dc:date>
    </item>
    <item>
      <title>Re: Lakebridge reconciliation code keeps running continuously without Spark jobs or errors</title>
      <link>https://community.databricks.com/t5/data-engineering/lakebridge-reconciliation-code-keeps-running-continuously/m-p/156575#M54451</link>
      <description>&lt;P&gt;Could you please confirm whether Lakebridge reconciliation requires all connection properties to be stored in Key Vault/secrets, including:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;P&gt;username&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;password&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;host&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;port&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;database&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;encrypt&lt;/P&gt;&lt;/LI&gt;&lt;LI&gt;&lt;P&gt;trustServerCertificate&lt;/P&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;We are asking because reconciliation is generating queries with values like None.database_name.table_name, so we are trying to understand whether some connection properties are not being resolved correctly from secrets/configuration.&lt;/P&gt;</description>
      <pubDate>Mon, 11 May 2026 12:23:44 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/lakebridge-reconciliation-code-keeps-running-continuously/m-p/156575#M54451</guid>
      <dc:creator>Akshay_Petkar</dc:creator>
      <dc:date>2026-05-11T12:23:44Z</dc:date>
    </item>
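The None.database_name.table_name symptom reported above is what you get when a catalog setting is never resolved and the unset value is then interpolated into a fully qualified name. A toy illustration (not Lakebridge's actual code):

```python
# Toy illustration: if a catalog config value stays None, naive string
# interpolation produces exactly the "None.db.table" pattern from the thread.
def qualified_name(catalog, schema, table):
    return f"{catalog}.{schema}.{table}"

print(qualified_name(None, "database_name", "table_name"))
# -> None.database_name.table_name
```

So the thing to chase is which config field (likely a source/target catalog) is arriving as None, rather than the secret storage mechanism itself.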
    <item>
      <title>Re: Lakebridge reconciliation code keeps running continuously without Spark jobs or errors</title>
      <link>https://community.databricks.com/t5/data-engineering/lakebridge-reconciliation-code-keeps-running-continuously/m-p/156607#M54454</link>
      <description>&lt;P&gt;Could you share an example of the config files you have set up for running the reconciliation? The config file determines most of the settings, so without it it is hard to assist you.&lt;/P&gt;</description>
      <pubDate>Mon, 11 May 2026 19:19:04 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/lakebridge-reconciliation-code-keeps-running-continuously/m-p/156607#M54454</guid>
      <dc:creator>KrisJohannesen</dc:creator>
      <dc:date>2026-05-11T19:19:04Z</dc:date>
    </item>
    <item>
      <title>Re: Lakebridge reconciliation code keeps running continuously without Spark jobs or errors</title>
      <link>https://community.databricks.com/t5/data-engineering/lakebridge-reconciliation-code-keeps-running-continuously/m-p/156611#M54455</link>
      <description>&lt;P&gt;Hi again &lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/88335"&gt;@Akshay_Petkar&lt;/a&gt;!&lt;/P&gt;&lt;P&gt;I don't think all values must be stored in Azure Key Vault; it depends on which Lakebridge reconciliation config version or path you are using.&lt;/P&gt;&lt;P&gt;For the older notebook-style config, Lakebridge has a secret_scope for source connection credentials, while DatabaseConfig still needs the source/target catalog and schema values. Per the docs, secret_scope, source_schema, target_catalog, target_schema, and the optional source_catalog are part of the reconciliation config: &lt;A title="https://databrickslabs.github.io/lakebridge/docs/reconcile/recon_notebook/" href="https://databrickslabs.github.io/lakebridge/docs/reconcile/recon_notebook/" target="_self"&gt;https://databrickslabs.github.io/lakebridge/docs/reconcile/recon_notebook/&lt;/A&gt;&lt;/P&gt;&lt;P&gt;The important clue is None.database_name.table_name, which normally means one of the catalog values is being resolved as None, not that Spark is slow.&lt;/P&gt;</description>
      <pubDate>Mon, 11 May 2026 19:46:02 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/lakebridge-reconciliation-code-keeps-running-continuously/m-p/156611#M54455</guid>
      <dc:creator>amirabedhiafi</dc:creator>
      <dc:date>2026-05-11T19:46:02Z</dc:date>
    </item>
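The config fields named in the reply above can be collected into a quick pre-flight check. The field names follow the linked recon notebook docs, but the values and the dict shape here are assumptions for illustration; verify the actual classes against your installed Lakebridge version:

```python
# Hypothetical values; field names follow the Lakebridge recon notebook docs.
database_config = {
    "source_catalog": "synapse_db",        # optional for some sources - if it
                                           # stays None you get None.db.table
    "source_schema": "dbo",
    "target_catalog": "main",
    "target_schema": "recon_target",
    "secret_scope": "my_kv_backed_scope",  # scope holding source credentials
}

# Fail fast on unresolved values before triggering reconciliation, instead of
# letting a None leak into the generated SQL.
missing = [k for k, v in database_config.items() if v in (None, "", "None")]
assert not missing, f"unresolved config values: {missing}"
```

Running a check like this before trigger_recon() turns the silent None.database_name.table_name hang into an immediate, named error.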
  </channel>
</rss>

