07-05-2022 06:36 AM
I tried to read a file from S3 but am facing the error below:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 53.0 failed 4 times, most recent failure: Lost task 0.3 in stage 53.0 (TID 82, xx.xx.xx.xx, executor 0): com.databricks.sql.io.FileReadException: Error while reading file s3://<mybucket>/<path>/file.csv.
I used:
spark.read.option("delimiter", "|").option("header", False).csv('s3://<mybucket>/<path>/file.csv')
07-05-2022 08:05 AM
Have you validated whether the file exists?
Is this happening with all files or only specific ones?
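For example, a quick existence check from a notebook (a minimal sketch; the bucket and path placeholders are from your snippet):

# dbutils.fs.ls raises an exception if the path does not exist
files = dbutils.fs.ls("s3://<mybucket>/<path>/")
print([f.name for f in files])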
07-06-2022 03:49 AM
The file exists. Some files fail with this error while others read fine.
07-06-2022 03:57 AM
I remember seeing this issue before. Please check the S3 lifecycle management. If the object has been migrated to another storage class (most likely an archive tier), it can no longer be read directly, and that produces this error.
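One way to confirm this is to inspect the object's storage class with boto3 (a sketch; bucket and key are placeholders, and head_object omits StorageClass entirely for STANDARD objects):

import boto3

s3 = boto3.client("s3")
# head_object returns the object's metadata without downloading the body
meta = s3.head_object(Bucket="<mybucket>", Key="<path>/file.csv")
print(meta.get("StorageClass", "STANDARD"))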
07-06-2022 04:29 AM
Thanks @Prabakar Ammeappin for this information. We do have lifecycle management configured. The files throwing this error had not been accessed for some time and had been archived. I wonder why they were moved to Glacier after only 60 days; we will have to revisit the lifecycle rules and change them.
07-06-2022 04:31 AM
Now it makes sense why I got the error for some files and not for the others.
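For anyone hitting the same thing: an archived object has to be restored before it can be read again. A minimal sketch with boto3 (the restore duration and tier below are assumptions, not part of the original thread):

import boto3

s3 = boto3.client("s3")
# Request a temporary copy of the archived object; depending on the tier,
# the restore can take minutes to hours before the file is readable again
s3.restore_object(
    Bucket="<mybucket>",
    Key="<path>/file.csv",
    RestoreRequest={"Days": 7, "GlacierJobParameters": {"Tier": "Standard"}},
)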
07-05-2022 12:33 PM
Which DBR version are you using? Could you please test with a different DBR version, for example DBR 9.x?
07-06-2022 03:49 AM
I tried all the available LTS versions.