Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Spark exception error while reading a Parquet file

shamly
New Contributor III

When I try to read a Parquet file from an Azure Data Lake container in Databricks, I get a Spark exception. Below is my query:

import pyarrow.parquet as pq
from pyspark.sql.functions import *
from datetime import datetime

data = spark.read.parquet(f"/mnt/data/country/abb/countrydata.parquet")

org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 14.0 failed 4 times, most recent failure: Lost task 0.3 in stage 14.0 (TID 35) (10.135.39.71 executor 0): org.apache.spark.SparkException: Exception thrown in awaitResult:

What does this mean, and what do I need to do to fix it?

2 REPLIES

Debayan
Databricks Employee

Hi @shamly pt, could you please post the full error stack here?

DavideAnghileri
Contributor

Hi @shamly pt, more info is needed to solve the issue. However, common problems are:

  • The storage is not mounted
  • The file doesn't exist in the mounted storage

Both can be checked from a notebook cell, as sketched below.

Also, there is no need to use an f-string when the string contains no curly-brace expressions, so you can remove the `f` in `f"/mnt/data/country/abb/countrydata.parquet"`.
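A minimal sketch of how to verify both points (the mount point and file path are copied from the question and may differ in your workspace):

# Check that /mnt/data is actually mounted
mounts = [m.mountPoint for m in dbutils.fs.mounts()]
print("/mnt/data" in mounts)

# Check that the file exists under the mounted path
display(dbutils.fs.ls("/mnt/data/country/abb/"))

# Read the file; the path has no placeholders, so no f-string is needed
data = spark.read.parquet("/mnt/data/country/abb/countrydata.parquet")
data.printSchema()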
