Data Engineering

Getting errors when reading data from Excel: InternalError: pip is not installed for /local_disk

Brianben
New Contributor III

Hi all,

We have a daily Databricks job that downloads Excel files from SharePoint and reads them. The job worked fine until today (3 March), when we started getting the following error message from the code that reads the Excel file:

org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 4984.0 failed 4 times, most recent failure: Lost task 1.3 in stage 4984.0 (TID 210941, 10.249.215.10, executor 2): org.apache.spark.SparkException: InternalError: pip is not installed for /local_disk0/spark-5c862e06-01f9-45b9-9e19-e3b66da55ba5/executor-e8eee9ca-9b55-452a-a841-338ce12461be/pythonVirtualEnvDirs/virtualEnv-1cf2ae47-3738-434e-9355-02a97960ebde

We have two code blocks that run in sequence:

dbutils.library.installPyPI("Office365-REST-Python-Client",version="2.4.4")

#########################
some code to download excel from sharepoint
########################
sparkDF = (spark.read.format("com.crealytics.spark.excel")
           .option("header", "true")
           .option("inferSchema", "true")
           .load(file_name))

We get the error when running the second code block. If I comment out the installPyPI line, the error goes away, so I think the error is related to the library install action, but I don't understand why it doesn't fail during the install itself, only afterwards.

Could someone clarify this for us? Thanks in advance.

1 REPLY

Renu_
New Contributor III

I think the issue comes from installing Office365-REST-Python-Client with dbutils.library.installPyPI, which appears to create a conflicting Python environment for the Spark executors. Because notebook-scoped installs modify the environment dynamically, the driver and executors can end up out of sync, which surfaces as errors on a later Spark action rather than at install time. A better approach is to install the library at the cluster level, either from the cluster's Libraries tab in the Databricks UI or with an init script, so everything runs in a stable, shared environment; a sketch of the init-script route follows.
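
For reference, a minimal init-script sketch of the approach described above. It assumes the cluster's Python environment lives at /databricks/python (the usual location on Databricks runtimes) and keeps the 2.4.4 pin from the original job; verify both against your runtime before relying on it:

#!/bin/bash
# Cluster-scoped init script: install the SharePoint client library into the
# cluster's shared Python environment before any notebook code runs, so the
# driver and all executors see the same packages.
/databricks/python/bin/pip install Office365-REST-Python-Client==2.4.4

Attach the script under the cluster's Advanced Options > Init Scripts (or install the same package from the cluster's Libraries tab), then remove the dbutils.library.installPyPI call from the notebook.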
