Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

com.databricks.sql.io.FileReadException Caused by: com.microsoft.azure.storage.StorageException: Blob hash mismatch

mayuri18kadam
New Contributor II

Hi, I am getting the following error:

com.databricks.sql.io.FileReadException: Error while reading file wasbs:REDACTED_LOCAL_PART@blobStorageName.blob.core.windows.net/cook/processYear=2021/processMonth=12/processDay=30/processHour=18/part-00003-tid-4178615623264760328.c000.avro.
Caused by: com.microsoft.azure.storage.StorageException: Blob hash mismatch (integrity check failed), Expected value is 8P7bo1mnLPoLxVw==, retrieved bu+CiCkLm/kc6QA==.

where processYear, processMonth, processDay and processHour are partition columns.

However, this is actually just a WARN and the code proceeds to execute (I am also able to read this file separately in a notebook), but eventually the job dies with:

WARN Lost task 9026.0 in stage 324.0 (TID 1525596, 10.139.64.16, executor 83): TaskKilled (Stage cancelled)
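If the corruption is transient or limited to a few files, one possible workaround (an assumption on my part, not something from this thread) is to let Spark skip unreadable files instead of failing the task, via the standard Spark SQL setting:

```
spark.sql.files.ignoreCorruptFiles true
```

Note this silently drops the affected files' rows, so it only makes sense if losing that data is acceptable while the underlying blob integrity issue is investigated.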

I am using the following Databricks and Spark configs:

RuntimeVersion: 5.5.x-scala2.11
MasterConfiguration:
    NodeType: Standard_D32s_v3
    NumberOfNodes: 1
WorkerConfiguration:
    NodeType: Standard_D32s_v3
    NumberOfNodes: 2
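For reference, the same cluster expressed as a Jobs API `new_cluster` payload would look roughly like this (a sketch using standard Databricks REST API field names, with the values copied from above):

```json
{
  "new_cluster": {
    "spark_version": "5.5.x-scala2.11",
    "node_type_id": "Standard_D32s_v3",
    "num_workers": 2
  }
}
```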

This same job is deployed in several other environments with much larger data volumes, and it does not fail there. Any idea why it fails here?

Thanks!

1 ACCEPTED SOLUTION


Hi @mayuri18kadam@gmail.com,

This could be a limitation of spark-submit jobs. Please check the docs at https://docs.databricks.com/jobs.html#create-a-job and look for the following information:

Important

There are several limitations for spark-submit tasks....


3 REPLIES

-werners-
Esteemed Contributor III

Can you try to run your code without the file that throws the exception?
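One way to try this, since the partition columns are in the path, is to build the list of partition directories explicitly and leave out the one containing the suspect part file. This is a sketch with a hypothetical helper (`partition_paths`) and placeholder account/container names, not code from the thread:

```python
# Placeholder storage account and container; substitute the real ones.
base = "wasbs://container@account.blob.core.windows.net/cook"

def partition_paths(year, month, day, hours, skip_hours=()):
    """Return hourly partition directories, omitting any hour in skip_hours."""
    return [
        f"{base}/processYear={year}/processMonth={month}"
        f"/processDay={day}/processHour={h}"
        for h in hours
        if h not in skip_hours
    ]

# Skip hour 18, where the corrupt part file lives.
paths = partition_paths(2021, 12, 30, hours=range(24), skip_hours={18})
# df = spark.read.format("avro").load(paths)  # run this on the cluster
print(len(paths))  # 23
```

Passing a list of paths to `spark.read.load` is standard; the filtering above just keeps the bad partition out of the scan so you can see whether the job succeeds without it.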

mayuri18kadam
New Contributor II

Yes, I can read it from a notebook with DBR 6.4 when I specify this path:

wasbs:REDACTED_LOCAL_PART@blobStorageName.blob.core.windows.net/cook/processYear=2021/processMonth=12/processDay=30/processHour=18

But the same job on DBR 6.4 via spark-submit fails again, each time complaining about a different part file under a different partition.

Also, we have the exact same code with the exact same Spark configs deployed in several other regions, but this is the only one where we have an issue. Could this be data related, e.g. some part-file size limitation for the given Spark version?

