When I try to read a Parquet file from an Azure Data Lake container in Databricks, I get a Spark exception. Below is my code:
```python
# Read the Parquet file from the mounted container
# (unused imports and the stray f-string prefix removed).
data = spark.read.parquet("/mnt/data/country/abb/countrydata.parquet")
```
This fails with the following exception:

```
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 14.0 failed 4 times, most recent failure: Lost task 0.3 in stage 14.0 (TID 35) (10.135.39.71 executor 0): org.apache.spark.SparkException: Exception thrown in awaitResult:
```
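In case it is relevant, here is how I would sanity-check that the mount point and the file are actually visible from the cluster. This is just a minimal sketch using `dbutils.fs`, which Databricks notebooks provide by default; the path is the same one from my read call above:

```python
# List the directory containing the file to confirm the mount is
# resolvable and the Parquet file exists at the expected path.
display(dbutils.fs.ls("/mnt/data/country/abb/"))

# Check the file entry itself; a size of 0 would indicate an empty file.
print(dbutils.fs.ls("/mnt/data/country/abb/countrydata.parquet"))
```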
What does this error mean, and what do I need to do to fix it?