Getting errors when reading data from Excel (InternalError: pip is not installed for /local_disk)
03-03-2025 01:16 AM
Hi all,
We have a daily Databricks job that downloads Excel files from SharePoint and reads them. The job worked fine until today (3 March). We are now getting the following error message when running the code that reads the Excel file:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 4984.0 failed 4 times, most recent failure: Lost task 1.3 in stage 4984.0 (TID 210941, 10.249.215.10, executor 2): org.apache.spark.SparkException: InternalError: pip is not installed for /local_disk0/spark-5c862e06-01f9-45b9-9e19-e3b66da55ba5/executor-e8eee9ca-9b55-452a-a841-338ce12461be/pythonVirtualEnvDirs/virtualEnv-1cf2ae47-3738-434e-9355-02a97960ebde
We have two code blocks that run in sequence:
dbutils.library.installPyPI("Office365-REST-Python-Client",version="2.4.4")
#########################
# some code to download the Excel file from SharePoint
#########################
sparkDF = spark.read.format("com.crealytics.spark.excel").option("header", "true").option("inferSchema", "true").load(file_name)
We get the error when running the second code block. I tried commenting out the installPyPI line and the error went away, so I think the error is related to the library install, but I don't understand why it fails after the install step rather than during it.
Could someone clarify for us? Thanks in advance.
03-09-2025 10:07 AM
I think the issue comes from installing Office365-REST-Python-Client with dbutils.library.installPyPI, which seems to create a conflicting Python environment for the Spark executors. Since notebook-scoped installs modify the environment dynamically, the executors and driver can end up out of sync, leading to errors like this one. A better approach is to install the library at the cluster level using the Databricks UI (the cluster's Libraries tab) or an init script, so everything runs in a stable, shared environment.
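If you want to keep the install inside the notebook, a minimal sketch of the notebook-scoped alternative looks like the lines below. This assumes your Databricks Runtime supports the %pip magic (which, as I understand it, is the documented replacement for dbutils.library.installPyPI); the package name and version are taken from your original post.

# Run this in its own cell at the top of the notebook, before any other Python code.
# %pip installs the package into the notebook-scoped environment so that the
# driver and executors see a consistent set of libraries for this session.
%pip install Office365-REST-Python-Client==2.4.4

If you go with the cluster-level route instead, no code is needed: add Office365-REST-Python-Client (pinned to 2.4.4) as a PyPI library on the cluster, and it will be available every time the job runs.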

