10-14-2021 10:45 AM
I have generated a result using SQL, but whenever I try to download the full result (1 million rows), it throws a SparkException. I can download the preview result but not the full result. Why? What happens under the hood when I try to download the full result?
Here is the exception:
SparkException: Job aborted due to stage failure: Task 0 in stage 133.0 failed 4 times, most recent failure: Lost task 0.3 in stage 133.0 (TID 2644) (192.***.x.x executor 6): com.databricks.sql.io.FileReadException: Error while reading file abfss:REDACTED_LOCAL_PART@someurl. It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved. If Delta cache is stale or the underlying files have been removed, you can invalidate Delta cache manually by restarting the cluster.
Caused by: FileReadException: Error while reading file abfss:REDACTED_LOCAL_PART@someurl. It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved. If Delta cache is stale or the underlying files have been removed, you can invalidate Delta cache manually by restarting the cluster.
Caused by: FileNotFoundException: Operation failed: "The specified path does not exist.", 404, HEAD, https://***.snappy.parquet?upn=false&action=getStatus&timeout=90
Caused by: AbfsRestOperationException: Operation failed: "The specified path does not exist.", 404, HEAD, https://***.snappy.parquet?upn=false&action=getStatus&timeout=90
10-15-2021 09:20 AM
@Md Tahseen Anam - Hello! My name is Piper and I'm one of the community moderators. Thanks for your question. Let's give it a bit longer to see what the community has to say. Hang in there!
10-19-2021 12:22 AM
Hi, thank you for your reply. Would be great to get some light on this.
10-26-2021 09:00 PM
Hi @Md Tahseen Anam, are there any updates happening to the table while you are downloading the results?
10-28-2021 12:52 AM
No update. Can it be a network issue?
11-08-2021 01:06 PM
Hi @Md Tahseen Anam,
Have you tried the following steps to re-run your query and get the full results? docs here
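If it is the stale-cache case the error text mentions, refreshing the table before re-running is the usual first step. A minimal sketch, assuming a Databricks notebook where spark is the predefined SparkSession; the table name and query below are placeholders:

# Invalidate any cached file listing for the table, as the FileReadException suggests.
spark.sql("REFRESH TABLE my_catalog.my_schema.my_table")  # placeholder table name

# Re-create the DataFrame / re-run the query after the refresh.
df = spark.sql("SELECT * FROM my_catalog.my_schema.my_table")
df.count()  # forces a full read to confirm the underlying files can be listed again

If the underlying files really were removed (for example by a VACUUM or an overwrite while the download was running), restarting the cluster to clear the Delta cache is the other option the error message mentions.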
11-09-2021 07:33 AM
It's working now, I think it was a network issue.
11-09-2021 08:06 AM
@Md Tahseen Anam - Thanks for letting us know. I'm glad things are working!
06-20-2022 01:50 AM
I am also having this issue again and again. I really want to understand what we can do to avoid this.
a week ago - last edited a week ago
Job aborted due to stage failure: Task 6506 in stage 46.0 failed 4 times, most recent failure: Lost task 6506.3 in stage 46.0 (TID 12896) (10.**.***.*** executor 12): java.lang.OutOfMemoryError: Cannot reserve 4194304 bytes of direct buffer memory (allocated: 5062249863, limit: 5065146368)
I am facing this issue when I run my code in a Databricks notebook on serverless compute. The code reads data from a table (700 million rows) and ingests the rows to an API in batches; after getting the response from the API, I store the failed batches into another table. After ingesting about 250 million records, I get this error.
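Not a definitive fix, but one pattern that keeps memory bounded is to push the batching into mapPartitions so only one batch of rows is materialized at a time, and to write the failed batches out as a table rather than accumulating them in the driver or in Python lists. This is only a rough sketch under assumptions about your loop: the endpoint URL, table names, and BATCH_SIZE are placeholders, and the RDD-based API used here may not be available on every compute type.

import requests  # assumed HTTP client; your actual ingestion call may differ

BATCH_SIZE = 1000  # placeholder; tune to what the API accepts

def ingest_partition(rows):
    # Sends one partition to the API in small batches and yields only the rows
    # whose batch failed, so at most BATCH_SIZE rows are held in memory per task.
    def send(batch):
        resp = requests.post("https://api.example.com/ingest",       # placeholder endpoint
                             json=[r.asDict() for r in batch], timeout=60)
        return [] if resp.ok else list(batch)
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) >= BATCH_SIZE:
            for failed in send(batch):
                yield failed
            batch = []
    if batch:
        for failed in send(batch):
            yield failed

source = spark.table("source_table")                    # placeholder table name
failed_rdd = source.rdd.mapPartitions(ingest_partition)

# Persist failed batches as a table instead of collecting them back to the driver.
spark.createDataFrame(failed_rdd, schema=source.schema) \
     .write.mode("append").saveAsTable("failed_batches")  # placeholder table name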