<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic PyPI cluster libraries failing to get installed in Administration &amp; Architecture</title>
    <link>https://community.databricks.com/t5/administration-architecture/pypi-cluster-libraries-failing-to-get-installed/m-p/62340#M937</link>
    <description>&lt;P&gt;Hi all,&lt;BR /&gt;&lt;BR /&gt;In my cluster, some of the PyPI cluster libraries have started failing to install. It is strange because some of them get installed while others constantly fail. Every failed installation produces the same error message (only the package name differs):&lt;BR /&gt;&lt;BR /&gt;"&lt;SPAN&gt;Library installation attempted on the driver node of cluster XXX and failed. Please refer to the following error message to fix the library or contact Databricks support. Error Code: DRIVER_LIBRARY_INSTALLATION_FAILURE. Error Message: org.apache.spark.SparkException: Process List(/bin/su, libraries, -c, bash /local_disk0/.ephemeral_nfs/cluster_libraries/python/python_start_clusterwide.sh /local_disk0/.ephemeral_nfs/cluster_libraries/python/bin/pip install 'jaydebeapi' --disable-pip-version-check) exited with code 1. WARNING: The directory '/home/libraries/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you should use sudo's -H flag.&lt;/SPAN&gt;"&lt;/P&gt;&lt;P&gt;Recently, I turned on Table Access Control for the Hive metastore, which required switching the cluster access mode from "No isolation shared" to "Shared". However, even after switching back to "No isolation shared", the problem persists.&lt;BR /&gt;&lt;BR /&gt;I can't see what could possibly be causing this problem, and I can't find any solution. Any tips/advice would be helpful.&lt;BR /&gt;&lt;BR /&gt;Thanks.&lt;/P&gt;</description>
    <pubDate>Thu, 29 Feb 2024 11:09:35 GMT</pubDate>
    <dc:creator>unauthenticated</dc:creator>
    <dc:date>2024-02-29T11:09:35Z</dc:date>
    <item>
      <title>PyPI cluster libraries failing to get installed</title>
      <link>https://community.databricks.com/t5/administration-architecture/pypi-cluster-libraries-failing-to-get-installed/m-p/62340#M937</link>
      <description>&lt;P&gt;Hi all,&lt;BR /&gt;&lt;BR /&gt;In my cluster, some of the PyPI cluster libraries have started failing to install. It is strange because some of them get installed while others constantly fail. Every failed installation produces the same error message (only the package name differs):&lt;BR /&gt;&lt;BR /&gt;"&lt;SPAN&gt;Library installation attempted on the driver node of cluster XXX and failed. Please refer to the following error message to fix the library or contact Databricks support. Error Code: DRIVER_LIBRARY_INSTALLATION_FAILURE. Error Message: org.apache.spark.SparkException: Process List(/bin/su, libraries, -c, bash /local_disk0/.ephemeral_nfs/cluster_libraries/python/python_start_clusterwide.sh /local_disk0/.ephemeral_nfs/cluster_libraries/python/bin/pip install 'jaydebeapi' --disable-pip-version-check) exited with code 1. WARNING: The directory '/home/libraries/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you should use sudo's -H flag.&lt;/SPAN&gt;"&lt;/P&gt;&lt;P&gt;Recently, I turned on Table Access Control for the Hive metastore, which required switching the cluster access mode from "No isolation shared" to "Shared". However, even after switching back to "No isolation shared", the problem persists.&lt;BR /&gt;&lt;BR /&gt;I can't see what could possibly be causing this problem, and I can't find any solution. Any tips/advice would be helpful.&lt;BR /&gt;&lt;BR /&gt;Thanks.&lt;/P&gt;</description>
      <pubDate>Thu, 29 Feb 2024 11:09:35 GMT</pubDate>
      <guid>https://community.databricks.com/t5/administration-architecture/pypi-cluster-libraries-failing-to-get-installed/m-p/62340#M937</guid>
      <dc:creator>unauthenticated</dc:creator>
      <dc:date>2024-02-29T11:09:35Z</dc:date>
    </item>
    <item>
      <title>Re: PyPI cluster libraries failing to get installed</title>
      <link>https://community.databricks.com/t5/administration-architecture/pypi-cluster-libraries-failing-to-get-installed/m-p/63559#M968</link>
      <description>&lt;P&gt;I've had this issue myself. What turned out to be the problem was that I had Windows line endings in my .sh script. You need to convert them to Linux line endings.&lt;/P&gt;</description>
      <pubDate>Wed, 13 Mar 2024 14:11:26 GMT</pubDate>
      <guid>https://community.databricks.com/t5/administration-architecture/pypi-cluster-libraries-failing-to-get-installed/m-p/63559#M968</guid>
      <dc:creator>jacovangelder</dc:creator>
      <dc:date>2024-03-13T14:11:26Z</dc:date>
    </item>
  </channel>
</rss>