Data Engineering
Batch Doesn't Exist Failure

JordanYaker
Contributor

I have a job that's been working perfectly fine since I deployed it earlier this month. Last night, however, one of the tasks within the job started failing with the following error:

java.lang.IllegalStateException: batch 4 doesn't exist
	at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$validateOffsetLogAndGetPrevOffset$1(MicroBatchExecution.scala:426)
	at scala.Option.getOrElse(Option.scala:189)
	at org.apache.spark.sql.execution.streaming.MicroBatchExecution.validateOffsetLogAndGetPrevOffset(MicroBatchExecution.scala:419)
	at org.apache.spark.sql.execution.streaming.MicroBatchExecution.populateStartOffsets(MicroBatchExecution.scala:466)
	at org.apache.spark.sql.execution.streaming.MultiBatchRollbackSupport.populateStartOffsetsWithRollbackHandling(MultiBatchRollbackSupport.scala:112)
	at org.apache.spark.sql.execution.streaming.MultiBatchRollbackSupport.populateStartOffsetsWithRollbackHandling$(MultiBatchRollbackSupport.scala:79)
	at org.apache.spark.sql.execution.streaming.MicroBatchExecution.populateStartOffsetsWithRollbackHandling(MicroBatchExecution.scala:57)
	at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStreamWithListener$2(MicroBatchExecution.scala:339)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:336)
	at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:334)
	at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:77)
	at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStreamWithListener$1(MicroBatchExecution.scala:329)
	at org.apache.spark.sql.execution.streaming.SingleBatchExecutor.execute(TriggerExecutor.scala:39)
	at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStreamWithListener(MicroBatchExecution.scala:319)
	at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:307)
	at org.apache.spark.sql.execution.streaming.StreamExecution.$anonfun$runStream$1(StreamExecution.scala:368)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:985)
	at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:332)
	at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.$anonfun$run$2(StreamExecution.scala:257)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at com.databricks.unity.EmptyHandle$.runWithAndClose(UCSHandle.scala:125)
	at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:257)

I can't find anything in the documentation or the community posts so far. Anyone have an idea?


JordanYaker
Contributor

I tried FSCK REPAIR on the off chance that it would work, but it had no effect.

Kaniz
Community Manager

Hi @JordanYaker, the error message java.lang.IllegalStateException: batch 4 doesn't exist is thrown when a Spark Structured Streaming job tries to read a batch that doesn't exist in the checkpoint metadata. This can happen for various reasons, such as a corrupted or partially deleted checkpoint directory, or a problem with the source data.
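As a first diagnostic step, it can help to look at the checkpoint's offsets directory, which contains one numbered file per micro-batch. The error above typically means the offset log references a newer batch while the file for an earlier batch (here, 4) is missing. This is a minimal sketch using only the Python standard library; find_offset_gaps is a hypothetical helper, and the simulated directory layout stands in for a real checkpoint path:

```python
# Hypothetical diagnostic: scan a Structured Streaming checkpoint's
# offsets/ directory for the gap that triggers "batch N doesn't exist".
import tempfile
from pathlib import Path

def find_offset_gaps(checkpoint_dir):
    """Return the sorted batch ids found in offsets/ and any ids missing below the max."""
    offsets = Path(checkpoint_dir) / "offsets"
    ids = sorted(int(p.name) for p in offsets.iterdir() if p.name.isdigit())
    if not ids:
        return ids, []
    present = set(ids)
    missing = [i for i in range(ids[0], ids[-1]) if i not in present]
    return ids, missing

# Simulate a checkpoint where batch 4's offset file was lost:
tmp = Path(tempfile.mkdtemp())
(tmp / "offsets").mkdir()
for i in [0, 1, 2, 3, 5]:
    (tmp / "offsets" / str(i)).write_text("{}")

ids, missing = find_offset_gaps(tmp)
print(ids, missing)  # -> [0, 1, 2, 3, 5] [4]
```

If a gap shows up here, the checkpoint itself has lost state, and simply restarting the query will keep hitting the same exception.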

One possible solution mentioned on StackOverflow is to manually copy the missing metadata files from the old output path to the new output path. However, this might not be feasible if you don’t have access to the old output path or if there are too many missing batches.
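If you do still have the old output path, the copy itself is mechanical: bring over any batch files present in the old sink metadata directory but absent from the new one. This is a hedged sketch with the standard library only; copy_missing_batches is a hypothetical helper, the _spark_metadata names reflect the FileSink's metadata directory layout, and the temp directories below just simulate old and new output paths:

```python
# Hypothetical sketch: copy batch files missing from the new output path's
# _spark_metadata directory from the old output path's copy.
import shutil
import tempfile
from pathlib import Path

def copy_missing_batches(old_meta, new_meta):
    """Copy any files present in old_meta but absent from new_meta; return the names copied."""
    new_meta = Path(new_meta)
    new_meta.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in sorted(Path(old_meta).iterdir()):
        target = new_meta / f.name
        if not target.exists():
            shutil.copy2(f, target)  # preserves file contents and timestamps
            copied.append(f.name)
    return copied

# Simulated example: old sink has batches 0-4, new sink only has batch 0.
old = Path(tempfile.mkdtemp()) / "_spark_metadata"; old.mkdir()
new = Path(tempfile.mkdtemp()) / "_spark_metadata"; new.mkdir()
for i in range(5):
    (old / str(i)).write_text("v1")
(new / "0").write_text("v1")

print(copy_missing_batches(old, new))  # -> ['1', '2', '3', '4']
```

Stop the streaming query before copying, and keep a backup of both directories, since a half-copied metadata log is worse than a missing one.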

Another potential cause could be a change in the output directory of the FileSink while keeping the ...locations. If this is the case, you might want to ensure that the output directory remains consistent across job restarts.
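In PySpark terms, that means pinning both the sink path and the checkpoint location to fixed values rather than generated ones. This is only a configuration sketch, not runnable on its own: it assumes an active SparkSession, a streaming DataFrame df, and the two example paths are hypothetical placeholders:

```python
# Configuration sketch: keep both paths stable across job restarts so the
# checkpoint and the sink's metadata stay consistent with each other.
OUTPUT_PATH = "s3://my-bucket/events/"                  # hypothetical sink path
CHECKPOINT_PATH = "s3://my-bucket/_checkpoints/events/" # hypothetical checkpoint path

query = (
    df.writeStream
      .format("parquet")
      .option("path", OUTPUT_PATH)
      .option("checkpointLocation", CHECKPOINT_PATH)
      .start()
)
```

If either path changes between runs (for example, a date-stamped output directory), the checkpoint will reference batches the new sink has never seen.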
