<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Custom docker container for GPU compute using python 3.12 in Machine Learning</title>
    <link>https://community.databricks.com/t5/machine-learning/custom-docker-container-for-gpu-compute-using-python-3-12/m-p/132140#M4317</link>
    <description>&lt;P class="p1"&gt;&lt;SPAN class="s1"&gt;Greetings &lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/184204"&gt;@knocheeri&lt;/a&gt;&amp;nbsp;,&lt;BR /&gt;After doing some research, it looks like there is currently no official support for Python 3.12 (classic compute clusters) in custom GPU containers. At the moment, the highest officially supported version on GPU runtimes is Python 3.10.&lt;SPAN class="Apple-converted-space"&gt;&amp;nbsp; &lt;/SPAN&gt;To be clear, I am referring to classic clusters where you are allowed to install libraries, not serverless.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="p1"&gt;&lt;SPAN class="s1"&gt;&lt;BR /&gt;The GitHub example you referenced only provides native support for Python 3.10. While I’ve come across anecdotal reports of people attempting to force Python 3.12 into GPU containers, these efforts typically fail due to incompatibilities between driver/worker processes, Python path mismatches, and broader runtime environment conflicts.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="p1"&gt;&lt;SPAN class="s1"&gt;&lt;BR /&gt;Additionally, Databricks has not published any documented or officially tested upgrade path for moving GPU custom containers to Python 3.12. This means that even if you managed to build a custom Docker image with Python 3.12, you’d likely hit instability issues when integrating with the Databricks runtime (CUDA drivers, Spark executors, ML libraries, and other tightly coupled dependencies).&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="p1"&gt;&lt;SPAN class="s1"&gt;&lt;BR /&gt;I hope this provides more context around the limitations you’re running into.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="p1"&gt;&lt;SPAN class="s1"&gt;&lt;BR /&gt;Cheers, Louis.&lt;/SPAN&gt;&lt;/P&gt;</description>
    <pubDate>Tue, 16 Sep 2025 16:53:16 GMT</pubDate>
    <dc:creator>Louis_Frolio</dc:creator>
    <dc:date>2025-09-16T16:53:16Z</dc:date>
    <item>
      <title>Custom docker container for GPU compute using python 3.12</title>
      <link>https://community.databricks.com/t5/machine-learning/custom-docker-container-for-gpu-compute-using-python-3-12/m-p/131701#M4301</link>
      <description>&lt;P&gt;&lt;SPAN&gt;Hello!&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;I have a GPU compute cluster that uses my own custom Docker container. I am trying to upgrade from Python 3.10 to 3.12, since 3.10 reaches end of life next year, but I cannot find any official Databricks runtime documentation covering this.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;My current working solution uses databricksruntime/gpu-base:cuda11.8 (based on this official example: &lt;A href="https://github.com/databricks/containers/blob/master/ubuntu/gpu/cuda-11.8/venv/Dockerfile" target="_blank" rel="noopener"&gt;https://github.com/databricks/containers/blob/master/ubuntu/gpu/cuda-11.8/venv/Dockerfile&lt;/A&gt;&lt;/SPAN&gt;&lt;SPAN&gt;), which supports Python 3.10 but does not natively support Python 3.12. I installed Python 3.12 from source on top of databricksruntime/gpu-base:cuda11.8, but I can't get it to work when running in Databricks with runtime 16.4 LTS on g4dn.xlarge [T4].&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Has anyone come up with a workaround or an alternative solution?&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 11 Sep 2025 19:18:05 GMT</pubDate>
      <guid>https://community.databricks.com/t5/machine-learning/custom-docker-container-for-gpu-compute-using-python-3-12/m-p/131701#M4301</guid>
      <dc:creator>knocheeri</dc:creator>
      <dc:date>2025-09-11T19:18:05Z</dc:date>
    </item>
    <item>
      <title>Re: Custom docker container for GPU compute using python 3.12</title>
      <link>https://community.databricks.com/t5/machine-learning/custom-docker-container-for-gpu-compute-using-python-3-12/m-p/132140#M4317</link>
      <description>&lt;P class="p1"&gt;&lt;SPAN class="s1"&gt;Greetings &lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/184204"&gt;@knocheeri&lt;/a&gt;&amp;nbsp;,&lt;BR /&gt;After doing some research, it looks like there is currently no official support for Python 3.12 (classic compute clusters) in custom GPU containers. At the moment, the highest officially supported version on GPU runtimes is Python 3.10.&lt;SPAN class="Apple-converted-space"&gt;&amp;nbsp; &lt;/SPAN&gt;To be clear, I am referring to classic clusters where you are allowed to install libraries, not serverless.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="p1"&gt;&lt;SPAN class="s1"&gt;&lt;BR /&gt;The GitHub example you referenced only provides native support for Python 3.10. While I’ve come across anecdotal reports of people attempting to force Python 3.12 into GPU containers, these efforts typically fail due to incompatibilities between driver/worker processes, Python path mismatches, and broader runtime environment conflicts.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="p1"&gt;&lt;SPAN class="s1"&gt;&lt;BR /&gt;Additionally, Databricks has not published any documented or officially tested upgrade path for moving GPU custom containers to Python 3.12. This means that even if you managed to build a custom Docker image with Python 3.12, you’d likely hit instability issues when integrating with the Databricks runtime (CUDA drivers, Spark executors, ML libraries, and other tightly coupled dependencies).&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="p1"&gt;&lt;SPAN class="s1"&gt;&lt;BR /&gt;I hope this provides more context around the limitations you’re running into.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="p1"&gt;&lt;SPAN class="s1"&gt;&lt;BR /&gt;Cheers, Louis.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 16 Sep 2025 16:53:16 GMT</pubDate>
      <guid>https://community.databricks.com/t5/machine-learning/custom-docker-container-for-gpu-compute-using-python-3-12/m-p/132140#M4317</guid>
      <dc:creator>Louis_Frolio</dc:creator>
      <dc:date>2025-09-16T16:53:16Z</dc:date>
    </item>
    <item>
      <title>Re: Custom docker container for GPU compute using python 3.12</title>
      <link>https://community.databricks.com/t5/machine-learning/custom-docker-container-for-gpu-compute-using-python-3-12/m-p/132164#M4318</link>
      <description>&lt;P&gt;Yeah, I have been down the rabbit hole of trying to force 3.12 into GPU containers to no avail. With 3.10 end of life next year, I hope that Databricks is working on an official solution.&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 16 Sep 2025 19:46:18 GMT</pubDate>
      <guid>https://community.databricks.com/t5/machine-learning/custom-docker-container-for-gpu-compute-using-python-3-12/m-p/132164#M4318</guid>
      <dc:creator>knocheeri</dc:creator>
      <dc:date>2025-09-16T19:46:18Z</dc:date>
    </item>
  </channel>
</rss>

