<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Supporting file not recognized in DLT pipeline in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/supporting-file-unrecognition-in-dlt-pipeline/m-p/155426#M54242</link>
    <description>&lt;P&gt;We have a DLT pipeline that creates several tables. The tables are built from transformation functions kept in a separate file, and that file is pulled in with an import.&lt;BR /&gt;We deploy these changes to Databricks via Terraform. Normally the pipeline runs without any issue, but sometimes after we deploy a code change it fails with "supporting file does not exist". If we simply redeploy the same changes, the pipeline runs fine again.&lt;BR /&gt;&lt;BR /&gt;In one of our DLT pipelines we have enabled retry_on_failure. When a similar issue occurs there, the pipeline fails at first but eventually succeeds on the next run, which that option triggers automatically.&lt;BR /&gt;&lt;BR /&gt;My question: after a Terraform deploy, if the first run fails we do a manual refresh, which I would expect to behave like retry_on_failure, yet it still fails. What could be the reason, and does retry_on_failure do something more than just a refresh?&lt;/P&gt;</description>
    <pubDate>Fri, 24 Apr 2026 08:31:12 GMT</pubDate>
    <dc:creator>Muralidharan_A</dc:creator>
    <dc:date>2026-04-24T08:31:12Z</dc:date>
    <item>
      <title>Supporting file not recognized in DLT pipeline</title>
      <link>https://community.databricks.com/t5/data-engineering/supporting-file-unrecognition-in-dlt-pipeline/m-p/155426#M54242</link>
      <description>&lt;P&gt;We have a DLT pipeline that creates several tables. The tables are built from transformation functions kept in a separate file, and that file is pulled in with an import.&lt;BR /&gt;We deploy these changes to Databricks via Terraform. Normally the pipeline runs without any issue, but sometimes after we deploy a code change it fails with "supporting file does not exist". If we simply redeploy the same changes, the pipeline runs fine again.&lt;BR /&gt;&lt;BR /&gt;In one of our DLT pipelines we have enabled retry_on_failure. When a similar issue occurs there, the pipeline fails at first but eventually succeeds on the next run, which that option triggers automatically.&lt;BR /&gt;&lt;BR /&gt;My question: after a Terraform deploy, if the first run fails we do a manual refresh, which I would expect to behave like retry_on_failure, yet it still fails. What could be the reason, and does retry_on_failure do something more than just a refresh?&lt;/P&gt;</description>
      <pubDate>Fri, 24 Apr 2026 08:31:12 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/supporting-file-unrecognition-in-dlt-pipeline/m-p/155426#M54242</guid>
      <dc:creator>Muralidharan_A</dc:creator>
      <dc:date>2026-04-24T08:31:12Z</dc:date>
    </item>
    <item>
      <title>Re: Supporting file not recognized in DLT pipeline</title>
      <link>https://community.databricks.com/t5/data-engineering/supporting-file-unrecognition-in-dlt-pipeline/m-p/155443#M54247</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/228160"&gt;@Muralidharan_A&lt;/a&gt;,&lt;/P&gt;
&lt;P&gt;To your question about whether retry_on_failure does more than a manual refresh, the answer is yes!&lt;/P&gt;
&lt;P&gt;retry_on_failure (along with pipelines.numUpdateRetryAttempts and pipelines.maxFlowRetryAttempts) performs classified, timed retries within the same update on&amp;nbsp;the same cluster. A manual Refresh is a brand-new update with none of that handling. &lt;BR /&gt;&lt;BR /&gt;Lakeflow Spark Declarative Pipelines only auto-retries errors it tags as retryable (transient I/O, library resolution, file-system races). A manual Refresh reruns&amp;nbsp;regardless of error type, so a deterministic failure will fail again. The retry fires seconds later, by which time the supporting file has usually propagated. Manual Refresh triggered immediately after the failure re-enters the same race.&lt;/P&gt;
&lt;P&gt;So, in your case: after a Terraform deploy there is a brief window where the pipeline definition is live but the imported Python file isn't yet fully visible to the DLT cluster. The first run fails with "supporting file does not exist". retry_on_failure waits and retries within the same update, by which point the file has propagated. A manual refresh starts a new update too quickly and hits the same problem, so it keeps failing until you redeploy (which effectively gives the file system enough time to catch up).&lt;/P&gt;
&lt;P&gt;The best thing to do would be to add depends_on in Terraform so the pipeline resource waits for the supporting files/wheels to exist before creation or update. You can also declare the helper code as a pipeline library (wheel via libraries { whl = ... } or direct notebook/file reference) instead of an ad-hoc import. This makes&amp;nbsp;Spark Declarative Pipelines aware of the dependency at definition time rather than discovering it at import time.&lt;/P&gt;
&lt;P&gt;Another tip is to set pipelines.numUpdateRetryAttempts and/or pipelines.maxFlowRetryAttempts in all pipeline configs so transient deploy-time races self-heal without manual&amp;nbsp;intervention.&lt;/P&gt;
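&lt;P&gt;A minimal sketch of that wiring (untested; resource names, paths, and retry counts below are placeholders, not your actual configuration):&lt;/P&gt;

```hcl
# Hypothetical sketch: publish the helper file first, then have the
# pipeline reference it so Terraform orders the two operations.
resource "databricks_workspace_file" "helpers" {
  source = "${path.module}/src/helpers.py"    # local source (placeholder)
  path   = "/Workspace/Shared/dlt/helpers.py" # workspace path (placeholder)
}

resource "databricks_pipeline" "example" {
  name = "example-pipeline"

  # Declaring the helper as a pipeline library makes the dependency
  # explicit at definition time instead of at import time.
  library {
    file {
      path = databricks_workspace_file.helpers.path
    }
  }

  configuration = {
    "pipelines.numUpdateRetryAttempts" = "3"
    "pipelines.maxFlowRetryAttempts"   = "2"
  }

  # The attribute reference above already creates an implicit dependency;
  # an explicit depends_on covers the case where no such reference exists.
  depends_on = [databricks_workspace_file.helpers]
}
```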
&lt;P&gt;If you keep the import approach, consider %run for helper notebooks to avoid stale entries in Python's module cache. The wheel approach is cleaner for production.&lt;/P&gt;
&lt;P&gt;Hope this helps.&lt;/P&gt;
&lt;P class="p1"&gt;&lt;FONT size="2" color="#FF6600"&gt;&lt;STRONG&gt;&lt;I&gt;If this answer resolves your question, could you mark it as “Accept as Solution”? That helps other users quickly find the correct fix.&lt;/I&gt;&lt;/STRONG&gt;&lt;/FONT&gt;&lt;I&gt;&lt;/I&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 24 Apr 2026 11:04:19 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/supporting-file-unrecognition-in-dlt-pipeline/m-p/155443#M54247</guid>
      <dc:creator>Ashwin_DSA</dc:creator>
      <dc:date>2026-04-24T11:04:19Z</dc:date>
    </item>
  </channel>
</rss>

