Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Updates of Materialized Views in Lakeflow Pipelines Produce MetadataChangedException en masse

StephanK8
New Contributor

Hi,

We've set up materialized views (as dlt.table()) for roughly 300 tables in a single Lakeflow pipeline. The pipeline is triggered externally by a workflow job that runs twice a day. When the pipeline runs, we see something strange: a large number of tables fail to update with a MetadataChangedException. Both the number of failing tables and which tables fail vary from run to run. What puzzles us most is that the concurrent metadata write comes from the same pipeline run, i.e., the run appears to work on the same table in two threads concurrently. The failing tables have one property in common: they receive no new data. But that condition alone is insufficient, since many tables that receive no new data are processed successfully.

The Databricks AI recommendation is to use a retry mechanism when setting up the table, but adding one makes no difference. Tables keep failing to update.

Any idea what goes on here? Any help is much appreciated.

Thanks, Stephan

2 REPLIES

Krishna_S
Databricks Employee

The issue could be with how comment updates are being handled. Comment updates are executed through the ALTER TABLE command, which modifies table metadata. When multiple transactions attempt to update metadata at the same time, such as simultaneous comment updates and merge operations on the same column, concurrency conflicts can occur, leading to the observed exceptions. Since ALTER TABLE operations (including SET TBLPROPERTIES and CHANGE COLUMN) directly modify metadata, concurrent attempts to update these properties are especially prone to conflicts. To mitigate this, I suggest avoiding concurrent comment updates on the same column wherever possible and implementing retry logic if the issue is intermittent. Can you try this out? If it does not work, please send the full error stack logs.
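For intermittent metadata conflicts, the usual shape of the retry logic is exponential backoff with jitter, so the conflicting writers don't collide again on the same schedule. The sketch below is a generic stdlib helper, not a Databricks API: `MetadataChangedException` here is a stand-in class for the Delta exception, and the operation being wrapped is hypothetical.

```python
import random
import time


class MetadataChangedException(Exception):
    """Stand-in for the Delta Lake exception raised on concurrent metadata writes."""


def with_retries(operation, max_attempts=5, base_delay=1.0):
    """Run `operation`, retrying on metadata conflicts with jittered
    exponential backoff; re-raise once the attempt budget is exhausted."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except MetadataChangedException:
            if attempt == max_attempts:
                raise
            # Randomized backoff spreads retries out in time so two writers
            # that conflicted once are unlikely to conflict again immediately.
            time.sleep(base_delay * (2 ** (attempt - 1)) * random.random())
```

Each table update would then be wrapped as `with_retries(lambda: update_table(...))`. If the conflict really is the same pipeline run writing to one table from two threads, though, retries only paper over it, which may be why they made no difference here.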

mark_ott
Databricks Employee

Workarounds & Recommendations

  • Limit Pipeline Parallelism: Modify the pipeline's configuration to reduce the maximum concurrency for DLT task execution, forcing more serialized or grouped updates.

  • Restructure Pipeline Graph: Instead of 300+ separate materialized views, consider batching tables with no new data into fewer logical processing stages or introducing dummy dependencies to force sequential operation.

  • DLT Table Options: For tables that frequently have no new data, use DLT or Delta Lake configuration options to skip "empty commits" or checkpoint updates unless new data is present, where available.

  • Databricks Contact: If the issue persists after tuning concurrency, coordinate with Databricks support to review orchestration logs for deeper root cause analysis, as it may uncover a bug or undocumented behavior in Lakeflow for high-output-count pipelines.
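The restructuring idea above can be sketched as follows. This is an illustrative pattern, not an official Lakeflow API: the 300 table names are partitioned into fixed-size stages, and in a real pipeline each stage's dlt.table() functions would declare a dependency on a marker view from the previous stage to force the stages to run one after another. The stage size of 50 and all names here are assumptions.

```python
def partition_into_stages(table_names, stage_size):
    """Split a flat list of table names into ordered stages of at most
    `stage_size` tables each, preserving the original order."""
    return [
        table_names[i : i + stage_size]
        for i in range(0, len(table_names), stage_size)
    ]


# Hypothetical 300-table pipeline split into 6 sequential stages of 50:
tables = [f"table_{n:03d}" for n in range(300)]
stages = partition_into_stages(tables, 50)
# In the pipeline definition, stage k's tables would then read from (or
# otherwise depend on) a dummy marker table emitted by stage k - 1, so at
# most 50 materialized views update concurrently.
```

Fewer concurrent updates means fewer simultaneous metadata writes, at the cost of a longer wall-clock run.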
