<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Intermittent failure with Python IMPORTS statements after upgrading to DBR18.0 in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/intermittent-failure-with-python-imports-statements-after/m-p/151452#M53649</link>
    <description>&lt;P&gt;Thanks for the suggestion, Fabricio. We tried using sys.path.insert and it didn't improve reliability. We found that converting some of the modules into notebooks improved reliability a lot, but other Python modules couldn't be converted to notebooks because they were used in Python UDFs and we ran into pickle issues. Also, 18.1 is out of beta now and it seemed slightly better than 18.0.&lt;/P&gt;&lt;P&gt;So overall, with 80% of our Python modules converted to notebooks and a sys.path.insert block in front of every remaining import, our job still crashes 1-3 times per day.&lt;/P&gt;</description>
    <pubDate>Thu, 19 Mar 2026 18:36:51 GMT</pubDate>
    <dc:creator>thackman</dc:creator>
    <dc:date>2026-03-19T18:36:51Z</dc:date>
    <item>
      <title>Intermittent failure with Python IMPORTS statements after upgrading to DBR18.0</title>
      <link>https://community.databricks.com/t5/data-engineering/intermittent-failure-with-python-imports-statements-after/m-p/150708#M53500</link>
      <description>&lt;P&gt;We have a Python module (WidgetUtil.py) that sits in the same folder as our notebook. For the past few years we have been using a simple import statement to use it. Starting with DBR 18.0, the import fails intermittently (25% of the time) when running from job compute in PROD. It is 100% reliable when I use a personal/dedicated compute cluster in DEV. We rolled back to DBR 17.2 and the failures went away. Then we rolled forward to DBR 18.1 Beta and the job started failing again. FYI: This is on Azure.&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="imports.png" style="width: 400px;"&gt;&lt;img src="https://community.databricks.com/t5/image/serverpage/image-id/24756iDDF285E62FCE580D/image-size/medium?v=v2&amp;amp;px=400" role="button" title="imports.png" alt="imports.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="image (1).png" style="width: 678px;"&gt;&lt;img src="https://community.databricks.com/t5/image/serverpage/image-id/24755iD2EE054774B59FBA/image-dimensions/678x134?v=v2" width="678" height="134" role="button" title="image (1).png" alt="image (1).png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;I did some debugging with AI suggestions; the theory was that FUSE was slow to mount. In the end that wasn't the case. We added a gatekeeper notebook at the start of the job that monitored the paths and waited for the FUSE mount to complete. What we found was that the directory was always immediately available, and we could either read the file immediately or it was never readable. Waiting up to two minutes never fixed the issue.&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="TestCode.jpg" style="width: 400px;"&gt;&lt;img src="https://community.databricks.com/t5/image/serverpage/image-id/24757i35C7F96A395E65CD/image-size/medium?v=v2&amp;amp;px=400" role="button" title="TestCode.jpg" alt="TestCode.jpg" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;A job that succeeded:&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="WorkingRun.jpg" style="width: 400px;"&gt;&lt;img src="https://community.databricks.com/t5/image/serverpage/image-id/24758iB387C29F015CE7B2/image-size/medium?v=v2&amp;amp;px=400" role="button" title="WorkingRun.jpg" alt="WorkingRun.jpg" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;A job that failed:&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="FailedRun.jpg" style="width: 400px;"&gt;&lt;img src="https://community.databricks.com/t5/image/serverpage/image-id/24759iC1C8E8F0D642BB86/image-size/medium?v=v2&amp;amp;px=400" role="button" title="FailedRun.jpg" alt="FailedRun.jpg" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;Why is importing a .py file unreliable now?&lt;/P&gt;</description>
      <pubDate>Thu, 12 Mar 2026 14:35:03 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/intermittent-failure-with-python-imports-statements-after/m-p/150708#M53500</guid>
      <dc:creator>thackman</dc:creator>
      <dc:date>2026-03-12T14:35:03Z</dc:date>
    </item>
    <item>
      <title>Re: Intermittent failure with Python IMPORTS statements after upgrading to DBR18.0</title>
      <link>https://community.databricks.com/t5/data-engineering/intermittent-failure-with-python-imports-statements-after/m-p/150716#M53503</link>
      <description>&lt;P&gt;The issue is caused by changes in Databricks Runtime 18.x that make importing a plain .py file from the notebook’s folder unreliable on job compute, even though the same pattern still works consistently on a personal DEV cluster. In 18.x, the folder that contains your notebook (and WidgetUtil.py) is no longer consistently added to Python's sys.path for jobs, so import WidgetUtil sometimes works and sometimes fails, even though the file is present and readable.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;Explicitly add the module folder to sys.path.&lt;/P&gt;
&lt;P&gt;Use this when you want minimal structural changes and a fast fix.&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;import os&lt;BR /&gt;import sys&lt;/P&gt;
&lt;P&gt;module_dir = "/Workspace/Shared/prod_utils"&lt;/P&gt;
&lt;P&gt;if module_dir not in sys.path:&lt;/P&gt;
&lt;P&gt;&amp;nbsp; &amp;nbsp;sys.path.insert(0, module_dir)&lt;/P&gt;
&lt;P&gt;import WidgetUtil&lt;/P&gt;</description>
      <pubDate>Thu, 12 Mar 2026 16:09:01 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/intermittent-failure-with-python-imports-statements-after/m-p/150716#M53503</guid>
      <dc:creator>Fabricio_Mattos</dc:creator>
      <dc:date>2026-03-12T16:09:01Z</dc:date>
    </item>
    <item>
      <title>Re: Intermittent failure with Python IMPORTS statements after upgrading to DBR18.0</title>
      <link>https://community.databricks.com/t5/data-engineering/intermittent-failure-with-python-imports-statements-after/m-p/151452#M53649</link>
      <description>&lt;P&gt;Thanks for the suggestion, Fabricio. We tried using sys.path.insert and it didn't improve reliability. We found that converting some of the modules into notebooks improved reliability a lot, but other Python modules couldn't be converted to notebooks because they were used in Python UDFs and we ran into pickle issues. Also, 18.1 is out of beta now and it seemed slightly better than 18.0.&lt;/P&gt;&lt;P&gt;So overall, with 80% of our Python modules converted to notebooks and a sys.path.insert block in front of every remaining import, our job still crashes 1-3 times per day.&lt;/P&gt;</description>
      <pubDate>Thu, 19 Mar 2026 18:36:51 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/intermittent-failure-with-python-imports-statements-after/m-p/151452#M53649</guid>
      <dc:creator>thackman</dc:creator>
      <dc:date>2026-03-19T18:36:51Z</dc:date>
    </item>
    <item>
      <title>Re: Intermittent failure with Python IMPORTS statements after upgrading to DBR18.0</title>
      <link>https://community.databricks.com/t5/data-engineering/intermittent-failure-with-python-imports-statements-after/m-p/152064#M53751</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;
&lt;P&gt;This is a known issue with the WSFS FUSE layer in DBR 18.x — a fix has been developed but may not be fully rolled out yet. The most reliable workaround is to package your .py modules as a wheel and install it via %pip install, which bypasses FUSE entirely. If you need to stay on 18.x, raise a support ticket referencing this behavior so engineering can check your region's patch status.&lt;/P&gt;
&lt;P&gt;Thanks,&lt;BR /&gt;&lt;BR /&gt;Emma&lt;/P&gt;</description>
      <pubDate>Wed, 25 Mar 2026 17:57:50 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/intermittent-failure-with-python-imports-statements-after/m-p/152064#M53751</guid>
      <dc:creator>emma_s</dc:creator>
      <dc:date>2026-03-25T17:57:50Z</dc:date>
    </item>
  </channel>
</rss>