<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Schema Evolution and Schema Enforcement without Delta Live Tables &amp; Unity Catalog in Get Started Discussions</title>
    <link>https://community.databricks.com/t5/get-started-discussions/schema-evolution-and-schema-enforcement-without-delta-live/m-p/156844#M11775</link>
    <description>Discussion thread: Schema Evolution and Schema Enforcement without Delta Live Tables &amp; Unity Catalog, in Get Started Discussions</description>
    <pubDate>Wed, 13 May 2026 15:28:33 GMT</pubDate>
    <dc:creator>Lu_Wang_ENB_DBX</dc:creator>
    <dc:date>2026-05-13T15:28:33Z</dc:date>
    <item>
      <title>Schema Evolution and Schema Enforcement without Delta Live Tables &amp; Unity Catalog</title>
      <link>https://community.databricks.com/t5/get-started-discussions/schema-evolution-and-schema-enforcement-without-delta-live/m-p/156830#M11774</link>
      <description>&lt;P&gt;In Delta Lake, schema evolution with mergeSchema handles column additions perfectly: new columns get added and old rows get NULL. But when there is a data type change in the incoming data (for example, a column that was INT now arriving as STRING from the source), mergeSchema throws an error even in append mode. In formats like ORC, Avro, and Parquet, however, mergeSchema handles both column additions and data type changes without any issue. So my question is: is this data type restriction in Delta's mergeSchema a design decision to protect existing data integrity, or is there a way to handle data type changes in append mode without resorting to casting before the write or doing a full overwrite? Also, in a production pipeline where the source schema keeps changing dynamically and we cannot hardcode the schema, what is the recommended approach to handle data type changes gracefully without breaking the pipeline?&lt;/P&gt;&lt;P&gt;#deltalake #Schema&lt;/P&gt;</description>
      <pubDate>Wed, 13 May 2026 14:17:17 GMT</pubDate>
      <guid>https://community.databricks.com/t5/get-started-discussions/schema-evolution-and-schema-enforcement-without-delta-live/m-p/156830#M11774</guid>
      <dc:creator>Rupa0503</dc:creator>
      <dc:date>2026-05-13T14:17:17Z</dc:date>
    </item>
    <item>
      <title>Re: Schema Evolution and Schema Enforcement without Delta Live Tables &amp; Unity Catalog</title>
      <link>https://community.databricks.com/t5/get-started-discussions/schema-evolution-and-schema-enforcement-without-delta-live/m-p/156844#M11775</link>
      <description>&lt;P&gt;Here are the answers to your questions:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Is Delta’s restriction a design decision?&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;Yes.&lt;/STRONG&gt; In Delta, &lt;CODE&gt;mergeSchema&lt;/CODE&gt; is mainly for &lt;STRONG&gt;schema evolution by adding columns&lt;/STRONG&gt;; type changes are still controlled by &lt;STRONG&gt;schema enforcement&lt;/STRONG&gt; unless the change qualifies for &lt;STRONG&gt;type widening&lt;/STRONG&gt;. If the mismatch does not meet type-widening conditions, Delta follows normal enforcement rules instead of silently changing the column type.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Can append mode handle type changes without pre-casting or a full overwrite?&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;Yes, but only for supported widening changes&lt;/STRONG&gt; such as &lt;CODE&gt;INT -&amp;gt; BIGINT&lt;/CODE&gt;, and only when the target table has &lt;CODE&gt;delta.enableTypeWidening = true&lt;/CODE&gt; and schema evolution is enabled on the write.&lt;BR /&gt;&lt;STRONG&gt;&lt;CODE&gt;INT -&amp;gt; STRING&lt;/CODE&gt;&lt;/STRONG&gt; is &lt;STRONG&gt;not&lt;/STRONG&gt; a supported automatic widening path; the Auto Loader docs explicitly list it as an &lt;STRONG&gt;unsupported data type change&lt;/STRONG&gt;, so the value is rescued rather than widened.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Is there example code that shows the intended pattern?&lt;/STRONG&gt;&lt;BR /&gt;Yes. A file in the &lt;CODE&gt;databrickslabs/lakebridge&lt;/CODE&gt; repository on GitHub uses exactly this Delta pattern: &lt;STRONG&gt;enable type widening first&lt;/STRONG&gt;, then alter the column types:&lt;/P&gt;
&lt;/LI&gt;
&lt;/OL&gt;
&lt;PRE&gt;&lt;CODE class="language-python"&gt;sqls: list | None = [
  f"ALTER TABLE {table_identifier} SET TBLPROPERTIES ('delta.enableTypeWidening' = 'true')",
  f"ALTER TABLE {table_identifier} ALTER COLUMN recon_metrics.row_comparison.missing_in_source TYPE BIGINT",
  f"ALTER TABLE {table_identifier} ALTER COLUMN recon_metrics.row_comparison.missing_in_target TYPE BIGINT",
]
&lt;/CODE&gt;&lt;/PRE&gt;
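&lt;P&gt;The widening rule from point 2 can be sketched in plain Python. This is a toy illustration, not Delta's actual implementation, and &lt;CODE&gt;WIDENING_PATHS&lt;/CODE&gt; is a hypothetical subset of the supported paths:&lt;/P&gt;

```python
# Toy illustration of the type-widening rule (not Delta's real
# implementation; WIDENING_PATHS is a hypothetical subset of the
# supported paths): a type change is absorbed in append mode only
# if it is a supported widening, otherwise schema enforcement applies.
WIDENING_PATHS = {
    "BYTE": {"SHORT", "INT", "BIGINT"},
    "SHORT": {"INT", "BIGINT"},
    "INT": {"BIGINT"},
}

def is_supported_widening(from_type: str, to_type: str) -> bool:
    """True only for a widening change that can be applied automatically."""
    return to_type in WIDENING_PATHS.get(from_type, set())

print(is_supported_widening("INT", "BIGINT"))   # True: widened
print(is_supported_widening("INT", "STRING"))   # False: enforcement kicks in
```

&lt;P&gt;&lt;CODE&gt;INT -&amp;gt; BIGINT&lt;/CODE&gt; passes, &lt;CODE&gt;INT -&amp;gt; STRING&lt;/CODE&gt; does not, which matches the enforcement behavior described above.&lt;/P&gt;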
&lt;OL start="4"&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;What is the recommended production approach for dynamic schema drift?&lt;/STRONG&gt;&lt;BR /&gt;Use a &lt;STRONG&gt;Bronze/Silver&lt;/STRONG&gt; pattern with &lt;STRONG&gt;Auto Loader&lt;/STRONG&gt;. By default, Auto Loader is designed to avoid breaking on type mismatches: for text formats it infers columns as &lt;STRONG&gt;STRING&lt;/STRONG&gt;, and with rescue modes it places unsupported type-change values into the &lt;STRONG&gt;rescued data column&lt;/STRONG&gt; instead of failing the pipeline.&lt;BR /&gt;If you want automatic widening for compatible changes, use &lt;STRONG&gt;&lt;CODE&gt;addNewColumnsWithTypeWidening&lt;/CODE&gt;&lt;/STRONG&gt; plus &lt;CODE&gt;delta.enableTypeWidening=true&lt;/CODE&gt;; unsupported changes like &lt;CODE&gt;INT -&amp;gt; STRING&lt;/CODE&gt; should be &lt;STRONG&gt;rescued/quarantined and normalized downstream&lt;/STRONG&gt; rather than forced into the Delta target schema during append.&lt;/P&gt;
&lt;/LI&gt;
&lt;LI&gt;
&lt;P&gt;&lt;STRONG&gt;Summary&lt;/STRONG&gt;&lt;/P&gt;
&lt;/LI&gt;
&lt;/OL&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;STRONG&gt;New columns&lt;/STRONG&gt; → use &lt;CODE&gt;mergeSchema&lt;/CODE&gt;.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Widenable type changes&lt;/STRONG&gt; → enable &lt;STRONG&gt;type widening&lt;/STRONG&gt; and keep append mode.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Non-widening changes like &lt;CODE&gt;INT -&amp;gt; STRING&lt;/CODE&gt;&lt;/STRONG&gt; → do &lt;STRONG&gt;not&lt;/STRONG&gt; rely on Delta &lt;CODE&gt;mergeSchema&lt;/CODE&gt;; land raw data, rescue the bad values, and reconcile/cast in a downstream layer, or explicitly alter/overwrite the table schema when you choose to accept the change.&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Wed, 13 May 2026 15:28:33 GMT</pubDate>
      <guid>https://community.databricks.com/t5/get-started-discussions/schema-evolution-and-schema-enforcement-without-delta-live/m-p/156844#M11775</guid>
      <dc:creator>Lu_Wang_ENB_DBX</dc:creator>
      <dc:date>2026-05-13T15:28:33Z</dc:date>
    </item>
    <item>
      <title>Re: Schema Evolution and Schema Enforcement without Delta Live Tables &amp; Unity Catalog</title>
      <link>https://community.databricks.com/t5/get-started-discussions/schema-evolution-and-schema-enforcement-without-delta-live/m-p/156852#M11776</link>
      <description>&lt;P&gt;Is it okay if we define the schema manually for production, since we are not using Auto Loader?&lt;/P&gt;</description>
      <pubDate>Wed, 13 May 2026 17:30:20 GMT</pubDate>
      <guid>https://community.databricks.com/t5/get-started-discussions/schema-evolution-and-schema-enforcement-without-delta-live/m-p/156852#M11776</guid>
      <dc:creator>Rupa0503</dc:creator>
      <dc:date>2026-05-13T17:30:20Z</dc:date>
    </item>
    <item>
      <title>Re: Schema Evolution and Schema Enforcement without Delta Live Tables &amp; Unity Catalog</title>
      <link>https://community.databricks.com/t5/get-started-discussions/schema-evolution-and-schema-enforcement-without-delta-live/m-p/156854#M11777</link>
      <description>&lt;P&gt;&lt;STRONG&gt;Yes, defining the schema manually for production is okay&lt;/STRONG&gt; when you are &lt;STRONG&gt;not using Auto Loader&lt;/STRONG&gt;.&lt;/P&gt;
&lt;P&gt;With a &lt;STRONG&gt;manually provided schema&lt;/STRONG&gt;, you should expect &lt;STRONG&gt;stricter enforcement&lt;/STRONG&gt;: Delta will not automatically absorb non-widening type changes like &lt;STRONG&gt;&lt;CODE&gt;INT -&amp;gt; STRING&lt;/CODE&gt;&lt;/STRONG&gt; in append mode.&lt;/P&gt;
&lt;P&gt;So the practical recommendation is:&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;STRONG&gt;Use a fixed contract at the ingestion boundary&lt;/STRONG&gt; if your downstream table is a curated production Delta table.&lt;/LI&gt;
&lt;LI&gt;&lt;STRONG&gt;Handle drift before the final write&lt;/STRONG&gt;, either by:
&lt;UL&gt;
&lt;LI&gt;normalizing/casting in code, or&lt;/LI&gt;
&lt;LI&gt;landing raw data in a staging/bronze table and quarantining bad/type-drifted records for later reconciliation.&lt;/LI&gt;
&lt;/UL&gt;
&lt;/LI&gt;
&lt;LI&gt;If the drift is only a &lt;STRONG&gt;supported widening change&lt;/STRONG&gt;, you can enable &lt;STRONG&gt;type widening&lt;/STRONG&gt; on the Delta table; otherwise, manual schema alone will not solve the issue.&lt;/LI&gt;
&lt;/OL&gt;
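&lt;P&gt;Steps 1 and 2 can be sketched in plain Python. This is an illustrative toy with hypothetical names (&lt;CODE&gt;CONTRACT&lt;/CODE&gt;, &lt;CODE&gt;conform&lt;/CODE&gt;), not a Databricks API: cast incoming records to the fixed contract where that is safe, and quarantine anything that cannot be cast before it reaches the final Delta write:&lt;/P&gt;

```python
# Toy sketch of the steps above (CONTRACT and conform are hypothetical
# names, not a Databricks API): cast incoming records to a fixed,
# manually defined schema where possible, and quarantine the rest for
# later reconciliation instead of letting them break the curated table.
CONTRACT = {"id": int, "qty": int}   # the manually defined schema

def conform(records):
    good, quarantined = [], []
    for rec in records:
        try:
            # Cast every contracted field; a failure here is type drift
            # we refuse to absorb silently in the curated table.
            good.append({name: cast(rec[name]) for name, cast in CONTRACT.items()})
        except (KeyError, TypeError, ValueError):
            quarantined.append(rec)   # reconcile these downstream
    return good, quarantined

good, quarantined = conform([
    {"id": "1", "qty": 5},       # numeric string: casts cleanly
    {"id": 2, "qty": "five"},    # non-castable drift: quarantined
])
print(good)          # [{'id': 1, 'qty': 5}]
print(quarantined)   # [{'id': 2, 'qty': 'five'}]
```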
&lt;P&gt;&lt;STRONG&gt;Summary:&lt;/STRONG&gt; a manually defined schema is &lt;STRONG&gt;not&lt;/STRONG&gt;, by itself, a graceful solution for arbitrary source type changes.&lt;/P&gt;</description>
      <pubDate>Wed, 13 May 2026 18:07:59 GMT</pubDate>
      <guid>https://community.databricks.com/t5/get-started-discussions/schema-evolution-and-schema-enforcement-without-delta-live/m-p/156854#M11777</guid>
      <dc:creator>Lu_Wang_ENB_DBX</dc:creator>
      <dc:date>2026-05-13T18:07:59Z</dc:date>
    </item>
  </channel>
</rss>

