<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: schema evolution with structured streaming: upstream schema change causes downstream writer fail in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/schema-evolution-with-structured-streaming-upstream-schema/m-p/150851#M53532</link>
    <description>&lt;P&gt;Serverless compute does not support &lt;CODE&gt;spark.conf.set("spark.sql.streaming.stateStore.stateSchemaCheck", "false")&lt;/CODE&gt;.&lt;/P&gt;&lt;P&gt;If using classic compute, this setting will disable strict schema validation.&lt;/P&gt;&lt;P&gt;I am looking for a solution that allows us to use serverless at silver.&lt;/P&gt;&lt;P&gt;Thanks!&lt;/P&gt;</description>
    <pubDate>Fri, 13 Mar 2026 19:24:18 GMT</pubDate>
    <dc:creator>cdn_yyz_yul</dc:creator>
    <dc:date>2026-03-13T19:24:18Z</dc:date>
    <item>
      <title>schema evolution with structured streaming: upstream schema change causes downstream writer to fail</title>
      <link>https://community.databricks.com/t5/data-engineering/schema-evolution-with-structured-streaming-upstream-schema/m-p/150850#M53531</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;Bronze: uses classic or job compute, Auto Loader with &lt;CODE&gt;.option("mergeSchema", "true")&lt;/CODE&gt;. Schema evolution works correctly, and data goes to bronze.my_bronze_table.&lt;/P&gt;&lt;P&gt;Silver: uses serverless compute. The reader reads bronze.my_bronze_table and does all necessary transformations; the writer creates silver.my_silver_table, which has a defined schema.&lt;/P&gt;&lt;P&gt;The problem I am trying to resolve: whenever the schema of bronze.my_bronze_table changes (thanks to schema evolution), the writer fails to write silver.my_silver_table with [STATE_STORE_VALUE_SCHEMA_NOT_COMPATIBLE]: the provided value schema does not match the existing schema in operator state.&lt;/P&gt;&lt;P&gt;The detailed debug log says clearly that the mismatch is in the schema of bronze.my_bronze_table.&lt;/P&gt;&lt;P&gt;Because silver uses serverless compute, I cannot set "spark.databricks.delta.schema.autoMerge.enabled", and &lt;CODE&gt;.option("mergeSchema", "true")&lt;/CODE&gt; by itself does not work for the silver table writer.&lt;/P&gt;&lt;P&gt;The current workaround is to delete the checkpoint of silver.my_silver_table.&lt;/P&gt;&lt;P&gt;Is there a better solution to this problem?&lt;/P&gt;&lt;P&gt;Thank you!&lt;/P&gt;</description>
      <pubDate>Fri, 13 Mar 2026 19:00:56 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/schema-evolution-with-structured-streaming-upstream-schema/m-p/150850#M53531</guid>
      <dc:creator>cdn_yyz_yul</dc:creator>
      <dc:date>2026-03-13T19:00:56Z</dc:date>
    </item>
    <item>
      <title>Re: schema evolution with structured streaming: upstream schema change causes downstream writer fail</title>
      <link>https://community.databricks.com/t5/data-engineering/schema-evolution-with-structured-streaming-upstream-schema/m-p/150851#M53532</link>
      <description>&lt;P&gt;Serverless compute does not support &lt;CODE&gt;spark.conf.set("spark.sql.streaming.stateStore.stateSchemaCheck", "false")&lt;/CODE&gt;.&lt;/P&gt;&lt;P&gt;If using classic compute, this setting will disable strict schema validation.&lt;/P&gt;&lt;P&gt;I am looking for a solution that allows us to use serverless at silver.&lt;/P&gt;&lt;P&gt;Thanks!&lt;/P&gt;</description>
      <pubDate>Fri, 13 Mar 2026 19:24:18 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/schema-evolution-with-structured-streaming-upstream-schema/m-p/150851#M53532</guid>
      <dc:creator>cdn_yyz_yul</dc:creator>
      <dc:date>2026-03-13T19:24:18Z</dc:date>
    </item>
    <item>
      <title>Re: schema evolution with structured streaming: upstream schema change causes downstream writer fail</title>
      <link>https://community.databricks.com/t5/data-engineering/schema-evolution-with-structured-streaming-upstream-schema/m-p/150998#M53556</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/145837"&gt;@cdn_yyz_yul&lt;/a&gt;,&lt;/P&gt;
&lt;P&gt;Because the silver stream runs on serverless, you can’t relax state-store schema checks or set custom Spark configs. When the upstream bronze table schema evolves in a way that changes the schema of any stateful operator, the streaming query will correctly fail with STATE_STORE_VALUE_SCHEMA_NOT_COMPATIBLE.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;On serverless, the supported pattern is to treat this as a &lt;/SPAN&gt;breaking change&lt;SPAN&gt;:&lt;/SPAN&gt;&lt;/P&gt;
&lt;OL class="p8i6j02"&gt;
&lt;LI class="p8i6j0a"&gt;Stop the silver streaming job.&lt;/LI&gt;
&lt;LI class="p8i6j0a"&gt;Update your code/schema for the new upstream schema.&lt;/LI&gt;
&lt;LI class="p8i6j0a"&gt;Restart the stream with a new checkpoint location (or delete/rotate the existing checkpoint) so that the state is rebuilt with the new schema.&lt;/LI&gt;
&lt;/OL&gt;
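&lt;P&gt;Step 3 above can be made systematic by tying the checkpoint location to a schema version, so a breaking change becomes "bump the version". A minimal sketch in plain Python; the helper name, root path, and versioning scheme are illustrative assumptions, not Databricks APIs:&lt;/P&gt;

```python
# Sketch: derive a checkpoint location from a schema version so that a
# breaking upstream schema change maps to bumping the version, which gives
# the restarted stream a fresh checkpoint and rebuilds its state.
# The helper and paths below are illustrative, not part of any Databricks API.
def checkpoint_for(table: str, schema_version: int,
                   root: str = "/checkpoints") -> str:
    """Return a checkpoint path tied to a schema version."""
    return f"{root}/{table.replace('.', '/')}/v{schema_version}"

# Bumping the version yields a different location, so the stream rebuilds
# state instead of failing with STATE_STORE_VALUE_SCHEMA_NOT_COMPATIBLE.
before = checkpoint_for("silver.my_silver_table", 1)
after = checkpoint_for("silver.my_silver_table", 2)
print(before)  # /checkpoints/silver/my_silver_table/v1
print(after)   # /checkpoints/silver/my_silver_table/v2
```

&lt;P&gt;The versioned path would then be passed as the stream writer's checkpointLocation option on restart.&lt;/P&gt;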
&lt;P&gt;This is expected behaviour. The documentation &lt;A href="https://docs.databricks.com/aws/en/spark/conf#configure-spark-properties-for-serverless-notebooks-and-jobs" target="_blank"&gt;here&lt;/A&gt; lists the Spark properties you can configure on serverless. It also states the following:&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="SparkConfigServerless.png" style="width: 999px;"&gt;&lt;img src="https://community.databricks.com/t5/image/serverpage/image-id/24868i60C69896BA8D01C7/image-size/large?v=v2&amp;amp;px=999" role="button" title="SparkConfigServerless.png" alt="SparkConfigServerless.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;You can also refer to this &lt;A href="https://docs.databricks.com/aws/en/compute/serverless/limitations" target="_self"&gt;page&lt;/A&gt;, which lists Serverless's limitations. It'll again take you to the same page mentioned above.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;If you want to avoid rebuilding state on every breaking schema change, you can run the silver stream on &lt;/SPAN&gt;classic/job compute&lt;SPAN&gt; instead of serverless, where you can control Spark configs and design more flexible state-handling. However, I appreciate that it may not be what you are looking for.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="p1"&gt;&lt;FONT size="2" color="#FF6600"&gt;&lt;STRONG&gt;&lt;I&gt;If this answer resolves your question, could you mark it as “Accept as Solution”? That helps other users quickly find the correct fix.&lt;/I&gt;&lt;/STRONG&gt;&lt;/FONT&gt;&lt;I&gt;&lt;/I&gt;&lt;/P&gt;</description>
      <pubDate>Sun, 15 Mar 2026 21:19:40 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/schema-evolution-with-structured-streaming-upstream-schema/m-p/150998#M53556</guid>
      <dc:creator>Ashwin_DSA</dc:creator>
      <dc:date>2026-03-15T21:19:40Z</dc:date>
    </item>
    <item>
      <title>Re: schema evolution with structured streaming: upstream schema change causes downstream writer fail</title>
      <link>https://community.databricks.com/t5/data-engineering/schema-evolution-with-structured-streaming-upstream-schema/m-p/151002#M53558</link>
      <description>&lt;P&gt;Thanks&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/216690"&gt;@Ashwin_DSA&lt;/a&gt;&lt;/P&gt;&lt;P&gt;I started to test: changing the bronze layer from addNewColumns to rescue (schemaEvolutionMode) while still keeping the Auto Loader format setting as .csv, then processing the _rescued_data at silver.&lt;/P&gt;&lt;P&gt;Or, as a last resort, having Auto Loader use the text format, then parsing and extracting columns at the silver layer.&lt;/P&gt;&lt;P&gt;I will need to do some testing to decide which method is simpler.&lt;/P&gt;</description>
      <pubDate>Mon, 16 Mar 2026 00:44:41 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/schema-evolution-with-structured-streaming-upstream-schema/m-p/151002#M53558</guid>
      <dc:creator>cdn_yyz_yul</dc:creator>
      <dc:date>2026-03-16T00:44:41Z</dc:date>
    </item>
    <item>
      <title>Re: schema evolution with structured streaming: upstream schema change causes downstream writer fail</title>
      <link>https://community.databricks.com/t5/data-engineering/schema-evolution-with-structured-streaming-upstream-schema/m-p/151031#M53562</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/145837"&gt;@cdn_yyz_yul&lt;/a&gt;,&lt;/P&gt;
&lt;P&gt;Great. Both of the approaches you’re testing are reasonable ways to protect your stateful silver stream from upstream schema changes on serverless...&lt;/P&gt;
&lt;P&gt;On the first option, with schemaEvolutionMode = "rescue" on CSV: you can keep Auto Loader in CSV but switch to&amp;nbsp;&lt;/P&gt;
&lt;DIV class="l8rrz21 _1ibi0s3dn" data-ui-element="code-block-container"&gt;
&lt;PRE&gt;&lt;CODE class="markdown-code-python p8i6j0e hljs language-python _12n1b832"&gt;.option(&lt;SPAN class="hljs-string"&gt;"cloudFiles.schemaEvolutionMode"&lt;/SPAN&gt;, &lt;SPAN class="hljs-string"&gt;"rescue"&lt;/SPAN&gt;)
.option(&lt;SPAN class="hljs-string"&gt;"rescuedDataColumn"&lt;/SPAN&gt;, &lt;SPAN class="hljs-string"&gt;"_rescued_data"&lt;/SPAN&gt;)&lt;/CODE&gt;&lt;/PRE&gt;
&lt;DIV&gt;
&lt;DIV&gt;This keeps the bronze table schema stable. Any new/unknown columns are captured as JSON in _rescued_data instead of becoming new physical columns.&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P&gt;In silver, you first&amp;nbsp;do stateless parsing/flattening of _rescued_data into whatever extra fields you need and feed only a stable, fixed projection of columns into the stateful part of the query (aggregations/joins, etc.).&lt;/P&gt;
&lt;P&gt;As long as the columns that participate in stateful operations don’t change schema, the state-store schema stays compatible, and you avoid STATE_STORE_*_SCHEMA_NOT_COMPATIBLE without needing any forbidden Spark confs on serverless.&lt;/P&gt;
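&lt;P&gt;The "stable projection" pattern above can be illustrated in plain Python (the column names and row shape are illustrative; on Databricks the equivalent would be expressed as DataFrame operations):&lt;/P&gt;

```python
import json

# Sketch of the pattern described above: parse the semi-structured
# _rescued_data, promote any rescued fields, then project back to a FIXED
# set of columns before anything stateful sees the row. Column names are
# illustrative, not from the original pipeline.
STATEFUL_COLUMNS = ["id", "ts", "value"]  # the stable, stateful projection

def to_stateful_row(row: dict) -> dict:
    rescued = json.loads(row.get("_rescued_data") or "{}")
    merged = {**row, **rescued}  # rescued fields extend/override the row
    # Only the fixed projection reaches aggregations/joins, so the
    # state-store schema never changes when new fields appear upstream.
    return {c: merged.get(c) for c in STATEFUL_COLUMNS}

row = {"id": 1, "ts": "2026-03-16", "_rescued_data": '{"value": 7, "new_col": "x"}'}
print(to_stateful_row(row))  # {'id': 1, 'ts': '2026-03-16', 'value': 7}
```

&lt;P&gt;New upstream fields such as new_col are still available after parsing, but they never enter the stateful projection unless you change it deliberately.&lt;/P&gt;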
&lt;P&gt;The other option, keeping the format as text and parsing at silver, is a more extreme version of the same idea. It gives you maximum isolation from source changes, but you lose CSV parsing at bronze and push more work into silver. You'd still want the same pattern: parse in a stateless layer, then project into a stable set of columns before any stateful operators.&lt;/P&gt;
&lt;P class="p8i6j01 paragraph"&gt;Something to remember here is that even with these patterns, any intentional change to the schema of your stateful part in Silver (for example, adding new grouping keys or changing types used in aggregations/joins) is still a breaking state change and requires starting that stream with a new checkpoint location. That’s independent of serverless. It’s how Structured Streaming state recovery works.&lt;/P&gt;
&lt;P class="p8i6j01 paragraph"&gt;Hope this helps.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 16 Mar 2026 09:23:41 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/schema-evolution-with-structured-streaming-upstream-schema/m-p/151031#M53562</guid>
      <dc:creator>Ashwin_DSA</dc:creator>
      <dc:date>2026-03-16T09:23:41Z</dc:date>
    </item>
    <item>
      <title>Re: schema evolution with structured streaming: upstream schema change causes downstream writer fail</title>
      <link>https://community.databricks.com/t5/data-engineering/schema-evolution-with-structured-streaming-upstream-schema/m-p/151090#M53574</link>
      <description>&lt;P&gt;The first option: setting the bronze table to use rescue.&lt;BR /&gt;In the silver layer:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;read the table from bronze&lt;/LI&gt;&lt;LI&gt;process the JSON in _rescued_data&lt;/LI&gt;&lt;LI&gt;add the rescued columns as new columns to the dataframe in silver, e.g., silver_df1&lt;/LI&gt;&lt;LI&gt;&lt;STRONG&gt;I do need to join (silver_df1 with another existing df) to get "enriched_df1"&lt;/STRONG&gt;&lt;/LI&gt;&lt;LI&gt;then do a stack to "unpivot" enriched_df1, turning some columns, including the rescued columns, into rows; this yields final_silver_df (the unpivot method is similar to &lt;A href="https://community.databricks.com/t5/data-engineering/using-quot-select-expr-quot-and-quot-stack-quot-to-unpivot/m-p/67987#M33506" rel="noopener" target="_blank"&gt;this post&lt;/A&gt;)&lt;/LI&gt;&lt;LI&gt;At this stage, the schema of final_silver_df is the same as before, but writing to the silver delta table (which was created before columns were rescued in bronze) still fails with a schema mismatch.&lt;/LI&gt;&lt;/UL&gt;</description>
      <pubDate>Mon, 16 Mar 2026 23:27:24 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/schema-evolution-with-structured-streaming-upstream-schema/m-p/151090#M53574</guid>
      <dc:creator>cdn_yyz_yul</dc:creator>
      <dc:date>2026-03-16T23:27:24Z</dc:date>
    </item>
    <item>
      <title>Re: schema evolution with structured streaming: upstream schema change causes downstream writer fail</title>
      <link>https://community.databricks.com/t5/data-engineering/schema-evolution-with-structured-streaming-upstream-schema/m-p/151108#M53584</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/145837"&gt;@cdn_yyz_yul&lt;/a&gt;,&lt;/P&gt;
&lt;P&gt;In your new version of the silver stream, you’ve added extra transformations (parsing _rescued_data, join, unpivot). Even though final_silver_df has the same columns as before, this changes the internal schema of at least one stateful operator compared to what’s stored in the checkpoint. As per the structured streaming docs, any change to stateful operations (agg, dedupe, stream‑stream joins, etc.) between restarts from the same checkpoint is unsupported and will fail with a state schema compatibility error.&amp;nbsp;That’s exactly what you’re seeing.&lt;/P&gt;
&lt;P&gt;Because you’re on serverless, there’s no supported way to relax this check with Spark configs.&amp;nbsp;The supported migration path is:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;Deploy the updated silver job (with _rescued_data handling, join, unpivot).&lt;/LI&gt;
&lt;LI&gt;Run it with a new checkpointLocation so it rebuilds its state from scratch.&lt;/LI&gt;
&lt;LI&gt;Keep the subset of columns that participate in stateful ops stable going forward. Handle future schema drift only in a stateless part of the pipeline (e.g., parse _rescued_data, unpivot, then project back to the same fixed set of stateful columns before any aggregations/joins).&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;After this one‑time reset, subsequent upstream schema changes in bronze will be absorbed by _rescued_data, and your stateful part won’t need to change, so you won’t hit the state schema error again.&lt;/P&gt;
&lt;P&gt;Try this first... print the schema of final_silver_df and compare it to DESCRIBE TABLE silver.my_silver_table.&amp;nbsp;If they match, the error is definitely about the state, not the table schema. You can also t&lt;SPAN&gt;emporarily run the updated query with the same write path but a &lt;/SPAN&gt;fresh checkpoint dir&lt;SPAN&gt; to confirm it starts and writes successfully. That’s a quick proof that the mismatch is in the checkpointed state.&lt;/SPAN&gt;&lt;/P&gt;
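&lt;P&gt;The comparison above can be sketched as a simple diff over (column, type) pairs. Plain Python only; on Databricks the pairs would come from final_silver_df.schema and the target table's schema, and the sample values below are illustrative:&lt;/P&gt;

```python
# Sketch: diff a streaming DataFrame's schema against the target table's
# columns as (name, type) pairs. The sample pairs are illustrative.
df_schema = [("id", "bigint"), ("ts", "timestamp"), ("value", "double")]
table_schema = [("id", "bigint"), ("ts", "timestamp"), ("value", "double")]

only_in_df = set(df_schema) - set(table_schema)
only_in_table = set(table_schema) - set(df_schema)

# If both diffs are empty, the table schema matches, so the failure must
# come from the checkpointed state rather than the Delta table.
schemas_match = not only_in_df and not only_in_table
print(schemas_match)  # True
```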
&lt;P&gt;&lt;SPAN&gt;Hope this helps.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 17 Mar 2026 06:54:38 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/schema-evolution-with-structured-streaming-upstream-schema/m-p/151108#M53584</guid>
      <dc:creator>Ashwin_DSA</dc:creator>
      <dc:date>2026-03-17T06:54:38Z</dc:date>
    </item>
    <item>
      <title>Re: schema evolution with structured streaming: upstream schema change causes downstream writer fail</title>
      <link>https://community.databricks.com/t5/data-engineering/schema-evolution-with-structured-streaming-upstream-schema/m-p/151142#M53597</link>
      <description>&lt;P&gt;Thanks&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/216690"&gt;@Ashwin_DSA&lt;/a&gt;&lt;/P&gt;&lt;P&gt;To continue the topic: given the context discussed, when there are stateful operations in silver, the mismatch is definitely in the checkpointed state, not in the table schema.&lt;/P&gt;&lt;P&gt;I further tested this by removing the join (the single stateful operation) from the silver transformation while keeping the _rescued_data handling and unpivot as-is; the stream writer at silver then writes the delta table correctly with the same checkpoint location.&lt;/P&gt;&lt;P&gt;Summary of my test:&lt;BR /&gt;- bronze: reads the first batch of .csv files, which have schema1 (set to use _rescued_data)&lt;BR /&gt;- silver: reads from bronze, transforms (including unpivot), writes to silver.mytable using checkpoint location mycheckpoint&lt;/P&gt;&lt;P&gt;- bronze: put more .csv files in the source location; these new files have extra columns&lt;BR /&gt;- bronze: reads the second batch of .csv; new columns are saved to _rescued_data by Auto Loader&lt;BR /&gt;- silver: run the same code as above: read, transform (including unpivot), write to silver.mytable using checkpoint location mycheckpoint&lt;/P&gt;&lt;P&gt;- verify the new rows in silver.mytable.&lt;/P&gt;&lt;P&gt;=== This confirms the cause of the mismatch, with testing.&lt;BR /&gt;In production, the join introduces a meaningful identifier, i.e., a primary key, for rows in the .csv files. I cannot remove it.&lt;/P&gt;&lt;P&gt;Now, what is the recommended design for such scenarios?&lt;/P&gt;&lt;P&gt;You mentioned:&lt;/P&gt;&lt;P&gt;1) Deploy the updated silver job (with _rescued_data handling, join, unpivot).&lt;BR /&gt;2) Run it with a new checkpointLocation so it rebuilds its state from scratch.&lt;/P&gt;&lt;P&gt;This is essentially the same as using cloudFiles.schemaEvolutionMode = addNewColumns, in the sense that a new checkpointLocation is required.&lt;BR /&gt;Given this behavior, I would consider addNewColumns simpler, since I do not have to handle _rescued_data.&lt;/P&gt;&lt;P&gt;3) Keep the subset of columns that participate in stateful ops stable going forward.&lt;BR /&gt;Handle future schema drift only in a stateless part of the pipeline&lt;BR /&gt;(e.g., parse _rescued_data, unpivot, then project back to the same fixed set of stateful columns before any aggregations/joins).&lt;/P&gt;&lt;P&gt;In production, the stateful operation (the join) cannot be removed.&lt;/P&gt;&lt;P&gt;Summary of my understanding:&lt;/P&gt;&lt;P&gt;- bronze: let Auto Loader infer and evolve the schema. The job will fail when Auto Loader finds new columns; a rerun will be fine.&lt;BR /&gt;- silver: the job will fail due to the checkpointed-state mismatch; remove the existing checkpoint (optionally remove the delta table) and rerun.&lt;/P&gt;&lt;P&gt;This is what we do currently. I am hoping to find a way so that the "remove existing checkpoint" step can be avoided. But after our discussion and my testing, it seems what we are doing is the most pragmatic solution.&lt;/P&gt;&lt;P&gt;Any other suggestions would be appreciated. I am interested in how schema changes should be handled when using structured streaming.&lt;/P&gt;&lt;P&gt;P.S. regarding handling/parsing _rescued_data: reliably adding new columns dynamically requires work.&lt;/P&gt;</description>
      <pubDate>Tue, 17 Mar 2026 14:13:27 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/schema-evolution-with-structured-streaming-upstream-schema/m-p/151142#M53597</guid>
      <dc:creator>cdn_yyz_yul</dc:creator>
      <dc:date>2026-03-17T14:13:27Z</dc:date>
    </item>
    <item>
      <title>Re: schema evolution with structured streaming: upstream schema change causes downstream writer fail</title>
      <link>https://community.databricks.com/t5/data-engineering/schema-evolution-with-structured-streaming-upstream-schema/m-p/151166#M53605</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/145837"&gt;@cdn_yyz_yul&lt;/a&gt;,&lt;/P&gt;
&lt;P&gt;Your experiment confirms the key point that with only _rescued_data handling + unpivot, the silver stream is effectively stateless, so reusing the same checkpoint works fine, even as bronze evolves.&amp;nbsp;As soon as you add the join, you introduce state, and when the upstream change affects the join’s input schema, the state-store schema no longer matches what’s stored under the existing checkpoint. The restart from that checkpoint then correctly fails with a state schema mismatch, even though the final DataFrame/table schema looks unchanged.&lt;/P&gt;
&lt;P&gt;On serverless, you also can’t bypass these checks using Spark configurations (no spark.sql.streaming.stateStore.*, no delta auto‑merge configs), so there isn’t a supported way to automatically evolve the state while keeping the same checkpoint.&amp;nbsp;I think what you’re doing today is therefore the pragmatic, supported pattern...&lt;/P&gt;
&lt;P&gt;And to your specific question about using _rescued_data vs. addNewColumns: they are equivalent for the checkpoint behaviour in the silver layer.&amp;nbsp;The reasons you might still choose one over the other are table-design/downstream concerns, not streaming-state concerns.&lt;/P&gt;
&lt;P&gt;From my perspective, the addNewColumns approach means simpler code and no JSON parsing. However, your bronze (and possibly silver) schema keeps growing wider as every new field becomes a real column.&lt;/P&gt;
&lt;P&gt;If you use rescue + _rescued_data, it keeps the physical schema more stable and captures unexpected fields and type mismatches in one semi-structured column. But you pay with extra parsing logic if you want to promote some of those rescued fields.&lt;/P&gt;
&lt;P&gt;So if your main goal is to minimise implementation complexity, and you’re already accepting that a new silver checkpoint is needed on those breaking changes, then preferring schemaEvolutionMode = "addNewColumns" is a perfectly reasonable and simpler choice.&lt;/P&gt;
&lt;P&gt;Hope this helps.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 17 Mar 2026 16:48:27 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/schema-evolution-with-structured-streaming-upstream-schema/m-p/151166#M53605</guid>
      <dc:creator>Ashwin_DSA</dc:creator>
      <dc:date>2026-03-17T16:48:27Z</dc:date>
    </item>
  </channel>
</rss>

