I am trying to read a .parquet file from an ADLS Gen2 location in Azure Databricks, but I am facing the below error:
spark.read.parquet("abfss://............/..._2023-01-14T08:01:29.8549884Z.parquet")
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3) (10.139.64.6 executor 0): org.apache.spark.SparkException: Exception thrown in awaitResult:
I searched on Google (as suggested in some posts, I tried setting spark.driver.maxResultSize to 20g; some blogs say to add the inferSchema option), but I keep getting the same error again and again. The file I am trying to read is only 12 KB.
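For reference, this is roughly what I tried based on those suggestions (a sketch only; the real abfss:// path is redacted, and the container/account names below are placeholders):

```python
# Sketch of the suggested fixes I attempted (placeholder path).

# Suggestion 1: raise the driver result size. Note this is normally
# set in the cluster's Spark config before startup; setting it on a
# running session may not take effect.
spark.conf.set("spark.driver.maxResultSize", "20g")

# Suggestion 2: the read itself, with the inferSchema option some
# blogs mention. (inferSchema is really a CSV/JSON option; Parquet
# files embed their own schema, so it likely has no effect here.)
df = (
    spark.read
    .option("inferSchema", "true")
    .parquet("abfss://<container>@<account>.dfs.core.windows.net/<path>.parquet")
)
df.show()
```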
I tried the below runtime versions on my Databricks cluster:
11.3 LTS (includes Apache Spark 3.3.0, Scala 2.12)
11.1 (includes Apache Spark 3.3.0, Scala 2.12)
10.4 LTS (includes Apache Spark 3.2.1, Scala 2.12)
Can anyone please advise how to overcome this issue?