org.apache.spark.SparkException: Job aborted due to stage failure:
02-04-2024 09:59 AM
Hi
I have around 20 million records in my DataFrame and want to save them to a horizontal SQL DB.
This is the error:
org.apache.spark.SparkException: Job aborted due to stage failure: A shuffle map stage with indeterminate output was failed and retried. However, Spark cannot rollback the ResultStage 1525 to re-process the input data, and has to fail this job. Please eliminate the indeterminacy by checkpointing the RDD before repartition and try again.
Here is my code:
df.write.format("jdbc") \
    .options(**DB_PROPS, **extra_options, dbtable=table, truncate=truncate) \
    .mode(mode) \
    .save()
Any opinion on what could be going wrong?
Regards
02-07-2024 02:53 AM
@Manmohan_Nayak did the resolution work for you?
I have been facing the same error for the last couple of days, on a job that was working earlier.
05-24-2024 01:56 AM
Facing the same issue since we moved from Spark 3.2.1 (Databricks 10.4) to Spark 3.3.2 (Databricks 12.2). How come we never saw this problem before, but now we do? Is it Spark related or Databricks related (autoscaling?)
06-04-2024 05:49 AM
This exception is raised when a failure leads to a stage retry, but retrying the stage could produce an inconsistent result (indeterminate output). The validation that raises it exists only in newer versions; it is likely unavailable in DBR 10.4 and older, which is why the error only shows up after upgrading.
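As a hypothetical illustration (not from the original post), a round-robin repartition() feeding the JDBC write is a classic source of indeterminate shuffle output, because which rows land in which partition depends on the order in which the input rows happen to arrive:

# Hypothetical trigger, reusing DB_PROPS and table from the post above.
# repartition() distributes rows round-robin, so its output is indeterminate:
# if an executor is lost mid-write, the retried shuffle map stage can
# redistribute rows differently, and Spark must abort rather than roll back
# the partially committed ResultStage.
df.repartition(64) \
    .write.format("jdbc") \
    .options(**DB_PROPS, dbtable=table) \
    .mode("append") \
    .save()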
To address the problem you can, as the error message suggests, checkpoint the DataFrame before the indeterminacy is introduced; a minimal sketch follows below.
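A minimal sketch, assuming the DB_PROPS, extra_options, table, truncate and mode variables from the original post; the checkpoint directory path is an assumption, so substitute any durable location (e.g. a DBFS path):

# A reliable checkpoint materializes the DataFrame to stable storage and
# cuts its lineage, so a retried stage re-reads the already-computed data
# instead of recomputing indeterminate shuffle output.
spark.sparkContext.setCheckpointDir("/tmp/spark-checkpoints")  # assumed path

df_stable = df.checkpoint(eager=True)  # eager=True materializes immediately

df_stable.write.format("jdbc") \
    .options(**DB_PROPS, **extra_options, dbtable=table, truncate=truncate) \
    .mode(mode) \
    .save()

Note that df.localCheckpoint() is cheaper but stores blocks on the executors themselves, so it does not protect against executor loss; a reliable checkpoint to durable storage is the safer choice here.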
This is commonly seen in scenarios where nodes are lost, for example due to spot instance termination or similar events. I am not fully sure about scale-down events, but that could be another cause.

