Hi,
We've set up materialized views (as dlt.table()) for roughly 300 tables in a single Lakeflow pipeline. The pipeline is triggered externally by a workflow job, twice a day. When the pipeline runs, we see something strange: a large number of tables fail to update with a MetadataChangedException. The number of tables that fail varies from run to run, and so does which tables fail. What puzzles us most is that the concurrent metadata write comes from the same pipeline run, i.e. the pipeline seems to work on the same table in two threads concurrently. The common property of the failing tables is that they receive no new data, but that alone doesn't explain it: many tables that receive no new data are processed successfully.
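For context, here is a simplified sketch of how we register the tables. The table names, the bronze schema, and the config list are placeholders; the real queries are more involved and driven by our own metadata.

```python
import dlt

# Placeholder config; the real list covers ~300 source tables.
SOURCE_TABLES = ["customers", "orders", "invoices"]

def make_mv(source_name: str):
    # Register one materialized view per source table.
    @dlt.table(name=f"mv_{source_name}", comment=f"Materialized view over {source_name}")
    def _mv():
        # 'spark' is provided by the pipeline runtime; 'bronze' is a placeholder schema.
        return spark.read.table(f"bronze.{source_name}")
    return _mv

for name in SOURCE_TABLES:
    make_mv(name)
```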
The Databricks AI assistant recommends adding a retry mechanism around the table setup, but adding one makes no difference: tables keep failing to update.
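This is roughly the kind of retry we added, following the same placeholder registration pattern as above; the retry count and backoff are arbitrary values.

```python
import time

import dlt

def with_retries(build_fn, retries=3, backoff_s=10):
    # Naive retry around the query function; retry count and backoff are arbitrary.
    for attempt in range(1, retries + 1):
        try:
            return build_fn()
        except Exception:
            if attempt == retries:
                raise
            time.sleep(backoff_s * attempt)

def make_mv_with_retry(source_name: str):
    @dlt.table(name=f"mv_{source_name}")
    def _mv():
        # Same placeholder source as above; the retry only wraps our read logic.
        return with_retries(lambda: spark.read.table(f"bronze.{source_name}"))
    return _mv
```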
Any idea what is going on here? Any help is much appreciated.
Thanks, Stephan