Hi all,
We have a daily Databricks job that downloads Excel files from SharePoint and reads them. The job worked fine until today (3 March), when we started getting the following error while running the code that reads the Excel file:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 4984.0 failed 4 times, most recent failure: Lost task 1.3 in stage 4984.0 (TID 210941, 10.249.215.10, executor 2): org.apache.spark.SparkException: InternalError: pip is not installed for /local_disk0/spark-5c862e06-01f9-45b9-9e19-e3b66da55ba5/executor-e8eee9ca-9b55-452a-a841-338ce12461be/pythonVirtualEnvDirs/virtualEnv-1cf2ae47-3738-434e-9355-02a97960ebde
We have two code blocks that run in sequence:
dbutils.library.installPyPI("Office365-REST-Python-Client",version="2.4.4")
#########################
some code to download excel from sharepoint
########################
sparkDF = spark.read.format("com.crealytics.spark.excel").option("header", "true").option("inferSchema", "true").load(file_name)
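For context on the elided download step, it follows the usual Office365-REST-Python-Client pattern, roughly like the sketch below (illustrative only; the site URL, credentials, and paths are placeholder values, not our actual code):

# Sketch of the SharePoint download step (placeholder values throughout)
from office365.sharepoint.client_context import ClientContext
from office365.runtime.auth.user_credential import UserCredential

site_url = "https://tenant.sharepoint.com/sites/our-site"              # placeholder
server_relative_url = "/sites/our-site/Shared Documents/daily.xlsx"    # placeholder
local_path = "/dbfs/tmp/daily.xlsx"                                    # placeholder

# Authenticate against the SharePoint site
ctx = ClientContext(site_url).with_credentials(
    UserCredential("user@tenant.com", "password")                      # placeholder credentials
)

# Download the Excel file to DBFS so Spark can read it afterwards
with open(local_path, "wb") as local_file:
    ctx.web.get_file_by_server_relative_url(server_relative_url).download(local_file).execute_query()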
The error occurs when running the second code block. If I comment out the installPyPI line, the error goes away, so I think the problem is related to the library install action, but I don't understand why it fails only afterwards rather than during the install itself.
Could someone clarify this for us? Thanks in advance.