When connecting to an AWS S3 bucket through DBFS, the application throws an error like:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 7864387.0 failed 4 times, most recent failure: Lost task 0.3 in stage 7864387.0 (TID 17097322) (xx.***.xx.x executor 853): com.databricks.sql.io.FileReadException: Error while reading file
The application imports CSV files from AWS S3 and had been working for a few days. I tried loading a very small file, but hit the same issue; even a file that was previously imported successfully now fails the same way. When I run the command below it works, which means the mount is active and the files in the directory are listed:
display(dbutils.fs.ls("/mnt/xxxxx/yyyy"))
Sample code snippet:
spark.read.format("csv").option("inferSchema", "true").option("header", "true").option("sep", ",").load(file_location)