10-14-2021 10:45 AM
I have generated a result using SQL. But whenever I try to download the full result (1 million rows), it throws a SparkException. I can download the preview result but not the full result. Why? What happens under the hood when I try to download the full result?
Here is the exception:
SparkException: Job aborted due to stage failure: Task 0 in stage 133.0 failed 4 times, most recent failure: Lost task 0.3 in stage 133.0 (TID 2644) (192.***.x.x executor 6): com.databricks.sql.io.FileReadException: Error while reading file abfss:REDACTED_LOCAL_PART@someurl. It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved. If Delta cache is stale or the underlying files have been removed, you can invalidate Delta cache manually by restarting the cluster.
Caused by: FileReadException: Error while reading file abfss:REDACTED_LOCAL_PART@someurl. It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved. If Delta cache is stale or the underlying files have been removed, you can invalidate Delta cache manually by restarting the cluster.
Caused by: FileNotFoundException: Operation failed: "The specified path does not exist.", 404, HEAD, https://***.snappy.parquet?upn=false&action=getStatus&timeout=90
Caused by: AbfsRestOperationException: Operation failed: "The specified path does not exist.", 404, HEAD, https://***.snappy.parquet?upn=false&action=getStatus&timeout=90
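For reference, the cache invalidation the error message suggests can be run before retrying the download. A minimal PySpark sketch, where the table name is a placeholder for the table the query actually reads:

```python
# Minimal sketch of the cache invalidation suggested by the error message.
# "my_schema.my_table" is a placeholder -- substitute the table the query reads.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Invalidate Spark's cached metadata and file listings for the table.
spark.sql("REFRESH TABLE my_schema.my_table")

# Or rebuild the DataFrame from scratch so no stale cached plan or files are reused.
df = spark.table("my_schema.my_table")
df.count()  # forces a fresh scan of the table's current files
```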
Labels: Delta, Download, Job, Result, Stage failure
Accepted Solutions
11-09-2021 07:33 AM
It's working now, I think it was a network issue.
10-15-2021 09:20 AM
@Md Tahseen Anam - Hello! My name is Piper and I'm one of the community moderators. Thanks for your question. Let's give it a bit longer to see what the community has to say. Hang in there!
10-19-2021 12:22 AM
Hi, thank you for your reply. It would be great to get some light on this.
10-26-2021 09:00 PM
Hi @Md Tahseen Anam, are there any updates happening to the table while you are downloading the results?
10-28-2021 12:52 AM
No updates. Could it be a network issue?
11-08-2021 01:06 PM
Hi @Md Tahseen Anam,
Have you tried the following steps to re-run your query and get the full results? docs here
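If the UI download keeps failing, another option (separate from the re-run steps in the docs above) is to write the full result to cloud storage and fetch the file from there. A minimal sketch, where the query text and output path are placeholders:

```python
# Minimal sketch: export the full result set to storage instead of downloading
# it through the UI. The query text and output path are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

result = spark.sql("SELECT * FROM my_schema.my_table")  # the original query goes here

# Write the ~1 million rows as a single CSV file to a path you can pull from.
(result.coalesce(1)
       .write.mode("overwrite")
       .option("header", True)
       .csv("abfss://container@account.dfs.core.windows.net/exports/full_result"))
```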
11-09-2021 07:33 AM
It's working now, I think it was a network issue.
11-09-2021 08:06 AM
@Md Tahseen Anam - Thanks for letting us know. I'm glad things are working!
06-20-2022 01:50 AM
I am also running into this issue again and again. I really want to understand what we can do to avoid it.
12-13-2024 01:56 PM - edited 12-13-2024 02:01 PM
Job aborted due to stage failure: Task 6506 in stage 46.0 failed 4 times, most recent failure: Lost task 6506.3 in stage 46.0 (TID 12896) (10.**.***.*** executor 12): java.lang.OutOfMemoryError: Cannot reserve 4194304 bytes of direct buffer memory (allocated: 5062249863, limit: 5065146368)
I am facing this issue when I run my code in a Databricks notebook on serverless compute. The code reads data from a table (700 million rows) and ingests the rows to an API in batches; after getting the response from the API, I store the failed batches in another table. After ingesting 250 million records, I get this error.
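For context, here is a minimal sketch of the batched ingestion pattern described above. The API endpoint, table names, and batch size are hypothetical placeholders, and row values are assumed to be JSON-serializable; the point is only that rows are streamed and sent in bounded batches so a single batch is all that is held in memory at a time:

```python
# Minimal sketch of batched ingestion from a table to an API.
# The endpoint URL, table names, and batch size are hypothetical placeholders.
import requests
from pyspark.sql import Row, SparkSession

spark = SparkSession.builder.getOrCreate()

API_URL = "https://example.com/ingest"   # placeholder endpoint
BATCH_SIZE = 10_000                      # placeholder batch size

def post_batch(batch, failed):
    """POST one batch to the API; keep the rows of any failed batch."""
    resp = requests.post(API_URL, json=batch, timeout=60)
    if resp.status_code != 200:
        failed.extend(batch)

src = spark.table("my_schema.source_table")   # placeholder source table
failed, batch = [], []

# toLocalIterator() streams rows to the driver incrementally, so only the
# current batch has to be held in memory at any one time.
for row in src.toLocalIterator():
    batch.append(row.asDict())
    if len(batch) == BATCH_SIZE:
        post_batch(batch, failed)
        batch = []
if batch:
    post_batch(batch, failed)

# Persist rows from failed batches to another table for retry (placeholder name).
if failed:
    (spark.createDataFrame([Row(**r) for r in failed])
          .write.mode("append")
          .saveAsTable("my_schema.failed_batches"))
```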

