I have a scheduled job (running in continuous mode) with the following code:
```
(
    spark
    .readStream
    .option("checkpointLocation", databricks_checkpoint_location)  # checkpoint for the stream
    .option("readChangeFeed", "true")         # read the Delta change data feed
    .option("startingVersion", VERSION + 1)   # begin from this table version
    .table(databricks_source_table_raw_postgres_nft)
    .writeStream
    .foreachBatch(process_batch)              # process each micro-batch
    .outputMode("append")
    .start()
)
```
I set `VERSION` to a fixed number when I first set up the job. However, I found that when I restart the job, it starts reading from that same `VERSION` again instead of resuming from the checkpoint. It looks like the checkpoint is not being used.
Does checkpointing work with the change data feed? If not, how can I ensure the job resumes from where it stopped in case it fails?
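One thing I'm unsure about is whether `checkpointLocation` belongs on the writer rather than the reader. This is a sketch of what I mean, using the same variables as above (I haven't verified that this is the fix):

```
(
    spark
    .readStream
    .option("readChangeFeed", "true")
    .option("startingVersion", VERSION + 1)  # presumably only used when no checkpoint exists yet
    .table(databricks_source_table_raw_postgres_nft)
    .writeStream
    # checkpoint location moved to the writer, where query progress is tracked
    .option("checkpointLocation", databricks_checkpoint_location)
    .foreachBatch(process_batch)
    .outputMode("append")
    .start()
)
```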
I would like the `continuous` schedule to restart the workflow immediately after a failure, rather than having to restart it manually with an updated starting version.
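To avoid ever updating `VERSION` by hand, I also considered only passing `startingVersion` when no checkpoint exists yet, along these lines (`checkpoint_exists` is a helper I would write myself, not a built-in; it checks for the `offsets` directory a streaming checkpoint creates):

```
def checkpoint_exists(path):
    # my own helper: a started streaming query writes its progress
    # under <checkpoint>/offsets, so I check for that directory
    try:
        return len(dbutils.fs.ls(f"{path}/offsets")) > 0
    except Exception:
        return False

reader = (
    spark
    .readStream
    .option("readChangeFeed", "true")
)
if not checkpoint_exists(databricks_checkpoint_location):
    # first run only: seed the stream from a known table version
    reader = reader.option("startingVersion", VERSION + 1)

(
    reader
    .table(databricks_source_table_raw_postgres_nft)
    .writeStream
    .option("checkpointLocation", databricks_checkpoint_location)
    .foreachBatch(process_batch)
    .outputMode("append")
    .start()
)
```

Is this kind of guard necessary, or does the checkpoint already take precedence over `startingVersion` on restart?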
Thanks