<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: checkpoint changes not working on my databricks job in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/checkpoint-changes-not-working-on-my-databricks-job/m-p/143647#M52210</link>
    <description>&lt;P&gt;As I understand it, you want to ingest only the new records coming through Kafka while keeping the old checkpoint path.&lt;/P&gt;&lt;P&gt;I would suggest changing .option("startingOffsets", "earliest") to .option("startingOffsets", "latest"). The stream would then start reading from the newest offsets and avoid reprocessing your old data. However, you would need to delete the old checkpoint path and restart your stream after making the change.&lt;/P&gt;&lt;P&gt;P.S. - Don't try the above directly on a production load.&lt;/P&gt;</description>
    <pubDate>Sun, 11 Jan 2026 22:20:17 GMT</pubDate>
    <dc:creator>nikhil_2212</dc:creator>
    <dc:date>2026-01-11T22:20:17Z</dc:date>
    <item>
      <title>checkpoint changes not working on my databricks job</title>
      <link>https://community.databricks.com/t5/data-engineering/checkpoint-changes-not-working-on-my-databricks-job/m-p/143643#M52209</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;I have a job that processes a Kafka stream via readStream. Due to an issue, we changed the checkpoint path to a different path and the job pulled all the records again; later, when I changed back to the original checkpoint location, it stopped pulling the new records coming through Kafka. Any clue what can be done to make it run again the way it was? Here is the code snippet.&lt;/P&gt;&lt;PRE&gt;curr_env = getenv()

config_path = "../configs/CDLZ_MEMBERS_Custom_Tables.yaml"
process_name = "CDLZ_MEMBERS_CUSTOM_Tables"
config = load_yaml_config(config_path)

source_kafka_topic = config[process_name]["KAFKA_CONFIG"]["input_kafka_topic"]
raw_table_name = config[process_name]["DELTA_TABLE_CONFIG"]["raw_delta_table"]
tables_to_read = config[process_name]["KAFKA_CONFIG"]["tables_to_read"]

raw_table_name = raw_table_name.replace("{env}", curr_env)

checkpoint = get_checkpoint_path(source_kafka_topic + "/" + process_name)

kafka_stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", kafka_bootstrap_servers)
    .option("kafka.security.protocol", "SASL_SSL")
    .option(
        "kafka.sasl.jaas.config",
        "kafkashaded.org.apache.kafka.common.security.plain.PlainLoginModule required username='{}' password='{}';".format(
            username, password
        ),
    )
    .option("kafka.ssl.endpoint.identification.algorithm", "https")
    .option("kafka.sasl.mechanism", "PLAIN")
    .option("failOnDataLoss", "false")
    .option("subscribe", source_kafka_topic)
    .option("startingOffsets", "earliest")
    .load()
)

query = (
    df_stream.writeStream.format("delta")
    .outputMode("append")
    .trigger(availableNow=True)
    .option("checkpointLocation", checkpoint)
    .option("mergeSchema", "true")
    .option("schemaCheck", "false")
    .toTable(raw_table_name)
)
query.awaitTermination()&lt;/PRE&gt;</description>
      <pubDate>Sun, 11 Jan 2026 19:45:55 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/checkpoint-changes-not-working-on-my-databricks-job/m-p/143643#M52209</guid>
      <dc:creator>vijsharm</dc:creator>
      <dc:date>2026-01-11T19:45:55Z</dc:date>
    </item>
    <item>
      <title>Re: checkpoint changes not working on my databricks job</title>
      <link>https://community.databricks.com/t5/data-engineering/checkpoint-changes-not-working-on-my-databricks-job/m-p/143647#M52210</link>
      <description>&lt;P&gt;As I understand it, you want to ingest only the new records coming through Kafka while keeping the old checkpoint path.&lt;/P&gt;&lt;P&gt;I would suggest changing .option("startingOffsets", "earliest") to .option("startingOffsets", "latest"). The stream would then start reading from the newest offsets and avoid reprocessing your old data. However, you would need to delete the old checkpoint path and restart your stream after making the change.&lt;/P&gt;
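&lt;P&gt;A minimal sketch of the changed read side (the connection options are elided; they stay as in your job):&lt;/P&gt;
&lt;PRE&gt;kafka_stream = (
    spark.readStream.format("kafka")
    # ... same kafka.* connection options as before ...
    .option("subscribe", source_kafka_topic)
    .option("startingOffsets", "latest")  # was "earliest"
    .load()
)&lt;/PRE&gt;
&lt;P&gt;P.S. - Don't try the above directly on a production load.&lt;/P&gt;</description>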
      <pubDate>Sun, 11 Jan 2026 22:20:17 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/checkpoint-changes-not-working-on-my-databricks-job/m-p/143647#M52210</guid>
      <dc:creator>nikhil_2212</dc:creator>
      <dc:date>2026-01-11T22:20:17Z</dc:date>
    </item>
    <item>
      <title>Re: checkpoint changes not working on my databricks job</title>
      <link>https://community.databricks.com/t5/data-engineering/checkpoint-changes-not-working-on-my-databricks-job/m-p/143648#M52211</link>
      <description>&lt;P&gt;So you are saying to first set .option("startingOffsets", "latest") to read only the latest records, and then delete the old checkpoint path and create a new one. Also, would you recommend building the checkpoint path like below, so this doesn't happen in the future?&lt;/P&gt;&lt;PRE&gt;from datetime import datetime
import uuid

# Generate a unique suffix using timestamp and UUID
unique_suffix = datetime.now().strftime("%Y%m%d_%H%M%S") + "_" + str(uuid.uuid4())

# Combine to form a unique checkpoint path
checkpoint_path = f"{base_checkpoint_dir}/checkpoint_{unique_suffix}"&lt;/PRE&gt;</description>
      <pubDate>Sun, 11 Jan 2026 22:41:56 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/checkpoint-changes-not-working-on-my-databricks-job/m-p/143648#M52211</guid>
      <dc:creator>vijsharm</dc:creator>
      <dc:date>2026-01-11T22:41:56Z</dc:date>
    </item>
    <item>
      <title>Re: checkpoint changes not working on my databricks job</title>
      <link>https://community.databricks.com/t5/data-engineering/checkpoint-changes-not-working-on-my-databricks-job/m-p/143649#M52212</link>
      <description>&lt;P&gt;When you swapped back to the old checkpoint, were any records flowing through, and were batches completing? It's possible that you've accumulated a big backlog with the old checkpoint, and/or the records in Kafka have expired. Also, the "startingOffsets" option only affects new checkpoints/streams.&lt;/P&gt;
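&lt;P&gt;One way to check whether batches are completing, from another cell while the stream runs (this sketch assumes it is the only active query on the cluster):&lt;/P&gt;
&lt;PRE&gt;# StreamingQuery handles for everything running on this cluster
q = spark.streams.active[0]
print(q.status)        # is a trigger active? is data available?
print(q.lastProgress)  # last batch: numInputRows plus Kafka start/end offsets&lt;/PRE&gt;</description>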
      <pubDate>Sun, 11 Jan 2026 22:51:41 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/checkpoint-changes-not-working-on-my-databricks-job/m-p/143649#M52212</guid>
      <dc:creator>cgrant</dc:creator>
      <dc:date>2026-01-11T22:51:41Z</dc:date>
    </item>
    <item>
      <title>Re: checkpoint changes not working on my databricks job</title>
      <link>https://community.databricks.com/t5/data-engineering/checkpoint-changes-not-working-on-my-databricks-job/m-p/143651#M52214</link>
      <description>&lt;P&gt;Yes, I have seen a few records flow through, but the messages that arrived after that are still not coming in.&lt;/P&gt;</description>
      <pubDate>Sun, 11 Jan 2026 23:12:37 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/checkpoint-changes-not-working-on-my-databricks-job/m-p/143651#M52214</guid>
      <dc:creator>vijsharm</dc:creator>
      <dc:date>2026-01-11T23:12:37Z</dc:date>
    </item>
    <item>
      <title>Hi @vijsharm, This is a common scenario when working with...</title>
      <link>https://community.databricks.com/t5/data-engineering/checkpoint-changes-not-working-on-my-databricks-job/m-p/150256#M53319</link>
      <description>&lt;P&gt;Hi &lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/169547"&gt;@vijsharm&lt;/a&gt;,&lt;/P&gt;
&lt;P&gt;This is a common scenario when working with Structured Streaming checkpoints and Kafka. Here is what is happening and how to resolve it.&lt;/P&gt;
&lt;P&gt;WHAT HAPPENED&lt;/P&gt;
&lt;P&gt;When you switched to a new checkpoint path, the streaming query had no prior offset information, so startingOffsets = "earliest" kicked in and reprocessed everything from the beginning of the Kafka topic. That part is expected.&lt;/P&gt;
&lt;P&gt;When you switched back to the original checkpoint path, the query resumed from the last committed offset stored in that original checkpoint. The key detail: startingOffsets is only used when a query starts for the very first time with no existing checkpoint. On all subsequent runs, the query always resumes from the offsets recorded in the checkpoint, ignoring startingOffsets entirely.&lt;/P&gt;
&lt;P&gt;So the original checkpoint still has the old offsets from before you switched away. If new records arrived in Kafka while you were using the alternate checkpoint, the original checkpoint does not know about them unless those offsets are still within range.&lt;/P&gt;
&lt;P&gt;WHY NEW RECORDS ARE NOT APPEARING&lt;/P&gt;
&lt;P&gt;There are a few likely causes:&lt;/P&gt;
&lt;P&gt;1. Kafka retention expired the messages. If your Kafka topic has a retention period (e.g., 7 days) and the original checkpoint was stale for longer than that, the offsets it recorded may now point to data that Kafka has already purged. Since you have failOnDataLoss = "false", Spark silently skips over the gap rather than throwing an error, and it may appear that no new data is coming in.&lt;/P&gt;
&lt;P&gt;2. The checkpoint committed offsets that already cover the new data. If the alternate checkpoint processed data past the point the original checkpoint left off, and then you switched back, the original checkpoint's offsets are behind. The query should pick up from those older offsets, but if Kafka has deleted those segments, the stream jumps ahead and may land at the current end with nothing new to process.&lt;/P&gt;
&lt;P&gt;3. Offset metadata mismatch. If the Kafka topic was reconfigured (partitions added, topic recreated, etc.) while the checkpoint was in use elsewhere, the stored partition/offset mapping may no longer align with the current topic state.&lt;/P&gt;
&lt;P&gt;HOW TO FIX THIS&lt;/P&gt;
&lt;P&gt;Option A: Start fresh with a new checkpoint (recommended)&lt;/P&gt;
&lt;P&gt;Use a brand-new checkpoint location and set startingOffsets to "latest" so you only pick up new records going forward. This avoids reprocessing and gives you a clean slate:&lt;/P&gt;
&lt;PRE&gt;# startingOffsets is a source option, so it belongs on the readStream, not the writeStream
kafka_stream = (
    spark.readStream.format("kafka")
    # ... same kafka.* connection options as in your job ...
    .option("subscribe", source_kafka_topic)
    .option("startingOffsets", "latest")
    .load()
)

checkpoint_new = get_checkpoint_path(source_kafka_topic + "/" + process_name + "_v2")

query = (
    kafka_stream.writeStream.format("delta")
    .outputMode("append")
    .trigger(availableNow=True)
    .option("checkpointLocation", checkpoint_new)
    .option("mergeSchema", "true")
    .option("schemaCheck", "false")
    .toTable(raw_table_name)
)
query.awaitTermination()&lt;/PRE&gt;
&lt;P&gt;If you need to backfill the gap between when the original checkpoint stopped and now, you can run a separate batch read from Kafka using startingOffsets and endingOffsets with specific timestamps or offset values to fill in the missing window.&lt;/P&gt;
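&lt;P&gt;A sketch of such a one-off batch backfill. The partition offsets in the JSON below are placeholders (as is the topic name); substitute the values recovered from the checkpoint and the current topic state, and reuse the same kafka.* security options as the streaming read:&lt;/P&gt;
&lt;PRE&gt;# Bounded batch read over the missing offset window only
backfill_df = (
    spark.read.format("kafka")
    .option("kafka.bootstrap.servers", kafka_bootstrap_servers)
    # ... same kafka.* security options as the streaming read ...
    .option("subscribe", source_kafka_topic)
    .option("startingOffsets", """{"members_topic": {"0": 42000, "1": 41500}}""")
    .option("endingOffsets", """{"members_topic": {"0": 50000, "1": 49700}}""")
    .load()
)
backfill_df.write.format("delta").mode("append").saveAsTable(raw_table_name)&lt;/PRE&gt;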
&lt;P&gt;Option B: Start fresh with "earliest" if you can tolerate reprocessing&lt;/P&gt;
&lt;P&gt;If your downstream pipeline is idempotent and can handle duplicate records, you can use a new checkpoint with startingOffsets = "earliest". This reprocesses everything but guarantees nothing is missed. To avoid duplicates in your Delta table, consider adding a MERGE or deduplication step downstream.&lt;/P&gt;
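&lt;P&gt;For the deduplication, a hypothetical foreachBatch + MERGE sketch; the "member_id" key is illustrative, so substitute your real business key:&lt;/P&gt;
&lt;PRE&gt;from delta.tables import DeltaTable

def upsert_batch(batch_df, batch_id):
    # Insert only rows whose key is not already in the target table
    target = DeltaTable.forName(spark, raw_table_name)
    (
        target.alias("t")
        .merge(batch_df.dropDuplicates(["member_id"]).alias("s"),
               "t.member_id = s.member_id")
        .whenNotMatchedInsertAll()
        .execute()
    )

# then: df_stream.writeStream.foreachBatch(upsert_batch) replaces .toTable(...)&lt;/PRE&gt;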
&lt;P&gt;Option C: Inspect and adjust the checkpoint offsets (advanced)&lt;/P&gt;
&lt;P&gt;You can examine the checkpoint's offsets directory to see what offsets were last committed:&lt;/P&gt;
&lt;PRE&gt;%fs ls &amp;lt;your_checkpoint_path&amp;gt;/offsets/&lt;/PRE&gt;
&lt;P&gt;Look at the latest offset file to see the recorded partition offsets. Compare these with the current Kafka topic offsets (earliest and latest) to understand the gap. If the committed offsets are still within Kafka's retention window, the query should resume correctly on the next trigger.&lt;/P&gt;
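&lt;P&gt;A sketch of that comparison, assuming the checkpoint variable and connection options from the job above:&lt;/P&gt;
&lt;PRE&gt;from pyspark.sql import functions as F

# 1) Newest committed offsets in the checkpoint; file names are batch ids
offset_files = [f for f in dbutils.fs.ls(checkpoint + "/offsets/") if f.name.isdigit()]
latest_file = max(offset_files, key=lambda f: int(f.name))
print(dbutils.fs.head(latest_file.path))

# 2) Current earliest/latest Kafka offsets per partition via a bounded batch read
offset_range = (
    spark.read.format("kafka")
    .option("kafka.bootstrap.servers", kafka_bootstrap_servers)
    # ... same kafka.* security options as the streaming read ...
    .option("subscribe", source_kafka_topic)
    .option("startingOffsets", "earliest")
    .option("endingOffsets", "latest")
    .load()
    .groupBy("partition")
    .agg(F.min("offset").alias("earliest"), F.max("offset").alias("latest"))
)
display(offset_range)&lt;/PRE&gt;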
&lt;P&gt;BEST PRACTICES FOR CHECKPOINT MANAGEMENT&lt;/P&gt;
&lt;P&gt;- Never share a checkpoint location between different streaming queries.&lt;BR /&gt;
- Never reuse a checkpoint after significant time gaps if Kafka retention might have expired the data.&lt;BR /&gt;
- When you need to reset a stream, always create a new checkpoint path rather than deleting or reusing an old one.&lt;BR /&gt;
- Use failOnDataLoss = "true" (the default) during development so you get clear errors when offsets fall out of Kafka's retention window, rather than silently skipping data.&lt;BR /&gt;
- Consider versioning your checkpoint paths (e.g., appending _v1, _v2) so you can always trace which checkpoint belongs to which configuration (see the sketch below).&lt;/P&gt;
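&lt;P&gt;A minimal sketch of the last two points; the version constant is illustrative:&lt;/P&gt;
&lt;PRE&gt;# Versioned checkpoint path: bump the suffix whenever you intentionally reset the stream
CHECKPOINT_VERSION = "v2"
checkpoint = get_checkpoint_path(
    source_kafka_topic + "/" + process_name + "_" + CHECKPOINT_VERSION
)

kafka_stream = (
    spark.readStream.format("kafka")
    # ... connection options ...
    .option("subscribe", source_kafka_topic)
    .option("failOnDataLoss", "true")  # the default; surfaces retention gaps as errors
    .load()
)&lt;/PRE&gt;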
&lt;P&gt;DOCUMENTATION REFERENCES&lt;/P&gt;
&lt;P&gt;Structured Streaming checkpoints:&lt;BR /&gt;
&lt;A href="https://docs.databricks.com/aws/en/structured-streaming/checkpoints.html" target="_blank"&gt;https://docs.databricks.com/aws/en/structured-streaming/checkpoints.html&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;Kafka Structured Streaming configuration (startingOffsets behavior):&lt;BR /&gt;
&lt;A href="https://docs.databricks.com/aws/en/structured-streaming/kafka.html" target="_blank"&gt;https://docs.databricks.com/aws/en/structured-streaming/kafka.html&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;* This reply used an agent system I built to research and draft this response based on the wide set of documentation I have available and previous memory. I personally review the draft for any obvious issues and for monitoring system reliability and update it when I detect any drift, but there is still a small chance that something is inaccurate, especially if you are experimenting with brand new features.&lt;/P&gt;
&lt;P&gt;If this answer resolves your question, could you mark it as "Accept as Solution"? That helps other users quickly find the correct fix.&lt;/P&gt;</description>
      <pubDate>Sun, 08 Mar 2026 20:51:47 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/checkpoint-changes-not-working-on-my-databricks-job/m-p/150256#M53319</guid>
      <dc:creator>SteveOstrowski</dc:creator>
      <dc:date>2026-03-08T20:51:47Z</dc:date>
    </item>
  </channel>
</rss>

