<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: SQL schemas migration in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/sql-schemas-migration/m-p/151540#M53658</link>
    <description>&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Hi &lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/192995"&gt;@maikel&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;
&lt;P&gt;Good question, and a pretty common pain point once you start promoting to higher environments.&lt;/P&gt;
&lt;P&gt;Databricks doesn't ship a dedicated schema migration framework, so the standard approach is what you'd expect: keep your DDL and seed SQL in version control, automate execution in order per environment, and write scripts to be idempotent so they're safe to re-run. You don't need to move to Python just to manage this well.&lt;/P&gt;
&lt;P&gt;For environment layout, separate catalogs or schemas per environment in Unity Catalog with consistent naming works well — e.g. &lt;CODE&gt;dev.analytics.orders&lt;/CODE&gt;, &lt;CODE&gt;test.analytics.orders&lt;/CODE&gt;, &lt;CODE&gt;prod.analytics.orders&lt;/CODE&gt;.&lt;/P&gt;
&lt;P&gt;Store migrations as ordered files:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;CODE&gt;001_create_base_schemas.sql&lt;/CODE&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;CODE&gt;002_create_orders_tables.sql&lt;/CODE&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;CODE&gt;003_seed_reference_data.sql&lt;/CODE&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Write them idempotent — &lt;CODE&gt;CREATE TABLE IF NOT EXISTS&lt;/CODE&gt;, &lt;CODE&gt;MERGE&lt;/CODE&gt; for seed data instead of plain &lt;CODE&gt;INSERT&lt;/CODE&gt;, &lt;CODE&gt;ALTER TABLE&lt;/CODE&gt; for structural changes where you can.&lt;/P&gt;
&lt;P&gt;Track what's run with a simple migration history table in each environment:&lt;/P&gt;
&lt;PRE&gt;&lt;CODE class="language-sql"&gt;CREATE TABLE IF NOT EXISTS admin.schema_migrations (
  version STRING,
  applied_at TIMESTAMP
);
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;Your deployment job reads which versions are already applied, runs only the new files, and logs a row after each successful migration. Nothing fancy, but it works.&lt;/P&gt;
&lt;P&gt;For the dev → test → prod promotion side, Databricks Asset Bundles (DABs) is worth a look. It's not a schema migration tool on its own, but it handles promoting jobs and pipelines across environments with variable overrides per environment — pairs naturally with the versioned SQL pattern above.&lt;/P&gt;
&lt;P&gt;On Alembic: it works, and you can keep migrations as mostly raw SQL so you retain full control over Delta DDL. Makes sense if your team is already standardized on it elsewhere. If not, the SQL-only approach is simpler and a lot more accessible to folks who aren't deep in Python.&lt;/P&gt;
&lt;P&gt;Bottom line — you don't need Alembic to do this well. Version-controlled SQL + a migration history table + automated execution per environment is a solid, widely-used pattern. Happy to dig into any piece of this further.&lt;/P&gt;
&lt;P&gt;Cheers, Lou&lt;/P&gt;</description>
    <pubDate>Fri, 20 Mar 2026 17:55:47 GMT</pubDate>
    <dc:creator>Louis_Frolio</dc:creator>
    <dc:date>2026-03-20T17:55:47Z</dc:date>
    <item>
      <title>SQL schemas migration</title>
      <link>https://community.databricks.com/t5/data-engineering/sql-schemas-migration/m-p/150464#M53422</link>
      <description>&lt;P&gt;Hello Community!&lt;BR /&gt;&lt;BR /&gt;I would like to ask for your recommendations on SQL schema migration best practices.&lt;BR /&gt;In our project we currently have various SQL schema definitions and data seeding saved in SQL files. Since we are moving to higher environments, what is the recommended way to manage schema migration in Databricks? I believe we need a process similar to standard database migrations, e.g. as used for APIs.&lt;BR /&gt;Shall we consider moving to Python with e.g. Alembic + SQLAlchemy for Databricks?&lt;BR /&gt;&lt;BR /&gt;Thanks a lot in advance for your responses!&lt;/P&gt;</description>
      <pubDate>Tue, 10 Mar 2026 08:06:55 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/sql-schemas-migration/m-p/150464#M53422</guid>
      <dc:creator>maikel</dc:creator>
      <dc:date>2026-03-10T08:06:55Z</dc:date>
    </item>
    <item>
      <title>Re: SQL schemas migration</title>
      <link>https://community.databricks.com/t5/data-engineering/sql-schemas-migration/m-p/150526#M53460</link>
      <description>&lt;P&gt;My two cents; looking for better perspectives from others.&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have seen organizations use Flyway or Liquibase for schema management.&amp;nbsp;&lt;/P&gt;&lt;P&gt;If you are looking for a Databricks-native approach, you can use DABs to deploy your schema and a Python job that runs seed scripts at bundle deployment. You might want to control which SQL scripts get run on subsequent runs of the seed job to avoid reloading the same data.&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 11 Mar 2026 02:59:05 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/sql-schemas-migration/m-p/150526#M53460</guid>
      <dc:creator>pradeep_singh</dc:creator>
      <dc:date>2026-03-11T02:59:05Z</dc:date>
    </item>
    <item>
      <title>Re: SQL schemas migration</title>
      <link>https://community.databricks.com/t5/data-engineering/sql-schemas-migration/m-p/151540#M53658</link>
      <description>&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Hi &lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/192995"&gt;@maikel&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;
&lt;P&gt;Good question, and a pretty common pain point once you start promoting to higher environments.&lt;/P&gt;
&lt;P&gt;Databricks doesn't ship a dedicated schema migration framework, so the standard approach is what you'd expect: keep your DDL and seed SQL in version control, automate execution in order per environment, and write scripts to be idempotent so they're safe to re-run. You don't need to move to Python just to manage this well.&lt;/P&gt;
&lt;P&gt;For environment layout, separate catalogs or schemas per environment in Unity Catalog with consistent naming works well — e.g. &lt;CODE&gt;dev.analytics.orders&lt;/CODE&gt;, &lt;CODE&gt;test.analytics.orders&lt;/CODE&gt;, &lt;CODE&gt;prod.analytics.orders&lt;/CODE&gt;.&lt;/P&gt;
&lt;P&gt;Store migrations as ordered files:&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;CODE&gt;001_create_base_schemas.sql&lt;/CODE&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;CODE&gt;002_create_orders_tables.sql&lt;/CODE&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;CODE&gt;003_seed_reference_data.sql&lt;/CODE&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;Write them idempotent — &lt;CODE&gt;CREATE TABLE IF NOT EXISTS&lt;/CODE&gt;, &lt;CODE&gt;MERGE&lt;/CODE&gt; for seed data instead of plain &lt;CODE&gt;INSERT&lt;/CODE&gt;, &lt;CODE&gt;ALTER TABLE&lt;/CODE&gt; for structural changes where you can.&lt;/P&gt;
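To make the MERGE-for-seed-data point concrete, here is a minimal Python sketch (the helper name, table, and rows are hypothetical) that generates an idempotent seed MERGE; rerunning the generated statement inserts nothing that is already present:

```python
# Minimal sketch: generate an idempotent seed MERGE instead of a plain INSERT.
# The helper name, table, and rows below are hypothetical examples.
def build_seed_merge(table, columns, key, rows):
    """Return a MERGE that inserts only rows whose key is not present yet."""
    values = ",\n    ".join(
        "(" + ", ".join(repr(v) for v in row) + ")" for row in rows
    )
    return (
        f"MERGE INTO {table} AS target\n"
        f"USING (\n"
        f"  SELECT * FROM VALUES\n"
        f"    {values}\n"
        f"  AS source({', '.join(columns)})\n"
        f") AS source\n"
        f"ON target.{key} = source.{key}\n"
        f"WHEN NOT MATCHED THEN INSERT *"
    )

sql = build_seed_merge(
    "dev.analytics.order_status",
    ["code", "description"],
    "code",
    [("NEW", "New Order"), ("SHIPPED", "Order Shipped")],
)
print(sql)
```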
&lt;P&gt;Track what's run with a simple migration history table in each environment:&lt;/P&gt;
&lt;PRE&gt;&lt;CODE class="language-sql"&gt;CREATE TABLE IF NOT EXISTS admin.schema_migrations (
  version STRING,
  applied_at TIMESTAMP
);
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;Your deployment job reads which versions are already applied, runs only the new files, and logs a row after each successful migration. Nothing fancy, but it works.&lt;/P&gt;
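The "runs only the new files" step can be sketched in plain Python (the file names and the applied-version set here are illustrative, not from any real project):

```python
# Sketch of selecting pending migrations, assuming files are named like
# 001_create_base_schemas.sql and applied versions come from the history table.
def pending_migrations(file_names, applied_versions):
    """Return (version, file) pairs not yet applied, in version order."""
    pending = []
    for name in sorted(file_names):
        if not name.endswith(".sql"):
            continue  # skip non-migration files
        version = name.split("_")[0]  # "001" from "001_create_base_schemas.sql"
        if version not in applied_versions:
            pending.append((version, name))
    return pending

files = [
    "002_create_orders_tables.sql",
    "001_create_base_schemas.sql",
    "003_seed_reference_data.sql",
    "README.md",
]
print(pending_migrations(files, {"001"}))
# [('002', '002_create_orders_tables.sql'), ('003', '003_seed_reference_data.sql')]
```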
&lt;P&gt;For the dev → test → prod promotion side, Databricks Asset Bundles (DABs) is worth a look. It's not a schema migration tool on its own, but it handles promoting jobs and pipelines across environments with variable overrides per environment — pairs naturally with the versioned SQL pattern above.&lt;/P&gt;
&lt;P&gt;On Alembic: it works, and you can keep migrations as mostly raw SQL so you retain full control over Delta DDL. Makes sense if your team is already standardized on it elsewhere. If not, the SQL-only approach is simpler and a lot more accessible to folks who aren't deep in Python.&lt;/P&gt;
&lt;P&gt;Bottom line — you don't need Alembic to do this well. Version-controlled SQL + a migration history table + automated execution per environment is a solid, widely-used pattern. Happy to dig into any piece of this further.&lt;/P&gt;
&lt;P&gt;Cheers, Lou&lt;/P&gt;</description>
      <pubDate>Fri, 20 Mar 2026 17:55:47 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/sql-schemas-migration/m-p/151540#M53658</guid>
      <dc:creator>Louis_Frolio</dc:creator>
      <dc:date>2026-03-20T17:55:47Z</dc:date>
    </item>
    <item>
      <title>Re: SQL schemas migration</title>
      <link>https://community.databricks.com/t5/data-engineering/sql-schemas-migration/m-p/151642#M53671</link>
      <description>&lt;P&gt;Hello&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/34815"&gt;@Louis_Frolio&lt;/a&gt;&amp;nbsp;!&lt;BR /&gt;&lt;BR /&gt;This sounds very nice! I already have DABs implemented! A few questions about the above, though I think I more or less understand the rest. What is your suggestion for keeping the version of a migration, or what do you mean by this? Is it about the prefix 001, 002, etc.? I think we do not want to edit existing migrations, so there will always be one version; if any change is required, it should be done in a separate file:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;001_create_base_schemas.sql
002_create_orders_tables.sql
003_seed_reference_data.sql
004_base_schemas_update.sql&lt;/LI-CODE&gt;&lt;P&gt;Also, we currently have YAML files (001_create_base_schemas.yml, 002_create_orders_tables.yml, etc.) separated per table creation and seed (exactly as you mentioned).&lt;BR /&gt;&lt;BR /&gt;Shall we merge them into one migration job, with each migration as a separate task?&amp;nbsp;&lt;BR /&gt;Additionally, every time a new migration file is added, shall we add a new task to the migration YAML? And regarding "&lt;SPAN&gt;Your deployment job reads which versions are already applied": shall we check at the top of every file whether it is already in admin.schema_migrations? Could you possibly give me a small draft of how you imagine this?&lt;BR /&gt;&lt;BR /&gt;Thank you a million in advance!&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Sun, 22 Mar 2026 11:29:26 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/sql-schemas-migration/m-p/151642#M53671</guid>
      <dc:creator>maikel</dc:creator>
      <dc:date>2026-03-22T11:29:26Z</dc:date>
    </item>
    <item>
      <title>Re: SQL schemas migration</title>
      <link>https://community.databricks.com/t5/data-engineering/sql-schemas-migration/m-p/152202#M53785</link>
      <description>&lt;P&gt;&lt;SPAN&gt;Great question — and since you already have DABs and numbered SQL files, you're most of the way there. You do &lt;/SPAN&gt;&lt;STRONG&gt;not&lt;/STRONG&gt;&lt;SPAN&gt; need Alembic or SQLAlchemy. Here's a concrete implementation of the migration runner pattern that plugs directly into your existing DABs setup.&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2&gt;&lt;SPAN&gt;The Pattern&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&lt;SPAN&gt;The idea is simple:&lt;/SPAN&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;SPAN&gt;Keep your numbered SQL migration files as-is (001&lt;/SPAN&gt;&lt;I&gt;&lt;SPAN&gt;, 002&lt;/SPAN&gt;&lt;/I&gt;&lt;SPAN&gt;, etc.)&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;SPAN&gt;Add a &lt;/SPAN&gt;&lt;STRONG&gt;migration history table&lt;/STRONG&gt;&lt;SPAN&gt; per environment to track what's been applied&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;SPAN&gt;Add a &lt;/SPAN&gt;&lt;STRONG&gt;single migration runner task&lt;/STRONG&gt;&lt;SPAN&gt; in your DABs bundle that runs all unapplied migrations in order&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;SPAN&gt;Each migration runs exactly once — no editing old files, new changes go in new files&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;H2&gt;&lt;SPAN&gt;Step 1: Migration History Table&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&lt;SPAN&gt;This gets created automatically by the runner, but here's what it looks like:&lt;/SPAN&gt;&lt;/P&gt;
&lt;PRE&gt;&lt;CODE class="language-sql"&gt;CREATE TABLE IF NOT EXISTS ${catalog}.admin.schema_migrations (
  version STRING NOT NULL,
  file_name STRING,
  applied_at TIMESTAMP DEFAULT current_timestamp(),
  checksum STRING
);
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;SPAN&gt;Step 2: Migration Runner (Python Task)&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&lt;SPAN&gt;Create a file &lt;/SPAN&gt;&lt;SPAN&gt;migrations/run_migrations.py&lt;/SPAN&gt;&lt;SPAN&gt; in your DABs project:&lt;/SPAN&gt;&lt;/P&gt;
&lt;PRE&gt;&lt;CODE class="language-python"&gt;import os
import hashlib

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# These come from DABs variable overrides per environment
CATALOG = spark.conf.get("spark.databricks.migration.catalog")
MIGRATIONS_DIR = spark.conf.get("spark.databricks.migration.dir", "/Workspace/migrations/sql")


def get_applied_versions():
    """Read which migrations have already been applied."""
    spark.sql(f"""
        CREATE TABLE IF NOT EXISTS {CATALOG}.admin.schema_migrations (
            version STRING NOT NULL,
            file_name STRING,
            applied_at TIMESTAMP,
            checksum STRING
        )
    """)
    rows = spark.sql(
        f"SELECT version FROM {CATALOG}.admin.schema_migrations"
    ).collect()
    return {row.version for row in rows}


def get_pending_migrations(applied):
    """Find SQL files that haven't been applied yet, sorted by version prefix."""
    files = []
    for f in sorted(os.listdir(MIGRATIONS_DIR)):
        if not f.endswith(".sql"):
            continue
        version = f.split("_")[0]  # e.g. "001" from "001_create_base_schemas.sql"
        if version not in applied:
            files.append((version, f))
    return files


def run_migration(version, file_name):
    """Execute a single migration file and record it."""
    path = os.path.join(MIGRATIONS_DIR, file_name)
    with open(path, "r") as fh:
        sql_content = fh.read()

    checksum = hashlib.md5(sql_content.encode()).hexdigest()

    # Split on semicolons to handle multi-statement files
    statements = [s.strip() for s in sql_content.split(";") if s.strip()]
    for stmt in statements:
        # Replace ${catalog} placeholder with actual catalog
        resolved = stmt.replace("${catalog}", CATALOG)
        print(f"  Executing: {resolved[:80]}...")
        spark.sql(resolved)

    # Record successful migration
    spark.sql(f"""
        INSERT INTO {CATALOG}.admin.schema_migrations
        VALUES ('{version}', '{file_name}', current_timestamp(), '{checksum}')
    """)
    print(f"  Recorded migration {version}: {file_name}")


def main():
    applied = get_applied_versions()
    print(f"Already applied: {sorted(applied)}")

    pending = get_pending_migrations(applied)
    if not pending:
        print("No new migrations to apply.")
        return

    print(f"Applying {len(pending)} migration(s)...")
    for version, file_name in pending:
        print(f"\n--- Migration {version}: {file_name} ---")
        run_migration(version, file_name)

    print("\nAll migrations applied successfully.")


main()
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
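One caveat on the naive semicolon split in the runner above: it breaks if a seed value contains a literal semicolon inside a quoted string. A hedged sketch of a slightly safer splitter (it still does not handle escaped quotes or comments) could look like:

```python
# Sketch of a splitter that ignores semicolons inside single-quoted strings.
# A full SQL tokenizer would also need to handle '' escapes and -- comments.
def split_statements(sql_text):
    """Split a SQL script on semicolons outside single-quoted strings."""
    statements, current, in_string = [], [], False
    for ch in sql_text:
        if ch == "'":
            in_string = not in_string  # toggle on every quote character
            current.append(ch)
        elif ch == ";" and not in_string:
            stmt = "".join(current).strip()
            if stmt:
                statements.append(stmt)
            current = []
        else:
            current.append(ch)
    stmt = "".join(current).strip()
    if stmt:
        statements.append(stmt)
    return statements

print(split_statements("INSERT INTO t VALUES ('a;b'); SELECT 1;"))
# ["INSERT INTO t VALUES ('a;b')", 'SELECT 1']
```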
&lt;H2&gt;&lt;SPAN&gt;Step 3: DABs Bundle Configuration&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&lt;SPAN&gt;In your &lt;/SPAN&gt;&lt;SPAN&gt;databricks.yml&lt;/SPAN&gt;&lt;SPAN&gt;, add the migration runner as a job with environment-specific catalog overrides:&lt;/SPAN&gt;&lt;/P&gt;
&lt;PRE&gt;&lt;CODE class="language-yaml"&gt;variables:
  catalog:
    default: dev_catalog

resources:
  jobs:
    schema_migrations:
      name: "schema-migrations-${bundle.environment}"
      tasks:
        - task_key: run_migrations
          existing_cluster_id: ${var.cluster_id}
          spark_python_task:
            python_file: ./migrations/run_migrations.py
            parameters: []
          spark_conf:
            spark.databricks.migration.catalog: ${var.catalog}
            spark.databricks.migration.dir: /Workspace/${workspace.root_path}/migrations/sql

environments:
  dev:
    variables:
      catalog: dev_catalog
  test:
    variables:
      catalog: test_catalog
  prod:
    variables:
      catalog: prod_catalog
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;H2&gt;&lt;SPAN&gt;Step 4: Your SQL Migration Files&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&lt;SPAN&gt;Keep them exactly as you have them — numbered, one per change, never edited after creation:&lt;/SPAN&gt;&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;migrations/sql/
  001_create_base_schemas.sql
  002_create_orders_table.sql
  003_seed_reference_data.sql
  004_add_status_column.sql      ← new changes = new file
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Example migration file (&lt;/SPAN&gt;&lt;SPAN&gt;001_create_base_schemas.sql&lt;/SPAN&gt;&lt;SPAN&gt;):&lt;/SPAN&gt;&lt;/P&gt;
&lt;PRE&gt;&lt;CODE class="language-sql"&gt;CREATE SCHEMA IF NOT EXISTS ${catalog}.analytics;
CREATE SCHEMA IF NOT EXISTS ${catalog}.admin;
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Example seed file (&lt;/SPAN&gt;&lt;SPAN&gt;003_seed_reference_data.sql&lt;/SPAN&gt;&lt;SPAN&gt;) — use MERGE for idempotency:&lt;/SPAN&gt;&lt;/P&gt;
&lt;PRE&gt;&lt;CODE class="language-sql"&gt;MERGE INTO ${catalog}.analytics.order_status AS target
USING (
  SELECT * FROM VALUES
    ('NEW', 'New Order'),
    ('SHIPPED', 'Order Shipped'),
    ('DELIVERED', 'Order Delivered')
  AS source(code, description)
) AS source
ON target.code = source.code
WHEN NOT MATCHED THEN INSERT *
&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
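The checksum column the runner records can also be used to detect a migration file that was edited after it was applied, which this pattern forbids. A small illustrative sketch (the schema texts below are hypothetical):

```python
import hashlib

def checksum(sql_text):
    """Same fingerprint the runner records when a migration is applied."""
    return hashlib.md5(sql_text.encode()).hexdigest()

def detect_drift(file_sql, recorded_checksums):
    """Return versions whose on-disk SQL no longer matches the recorded checksum."""
    return sorted(
        version
        for version, sql_text in file_sql.items()
        if version in recorded_checksums
        and checksum(sql_text) != recorded_checksums[version]
    )

# Version 001 was applied, then its file was edited afterwards: flag it.
recorded = {"001": checksum("CREATE SCHEMA IF NOT EXISTS dev.analytics;")}
current = {"001": "CREATE SCHEMA IF NOT EXISTS dev.analytics; -- edited later"}
print(detect_drift(current, recorded))
# ['001']
```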
&lt;H2&gt;&lt;SPAN&gt;How It Works in Practice&lt;/SPAN&gt;&lt;/H2&gt;
&lt;P&gt;&lt;STRONG&gt;Adding a new migration:&lt;/STRONG&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;SPAN&gt;Create &lt;/SPAN&gt;&lt;SPAN&gt;005_add_customer_email.sql&lt;/SPAN&gt;&lt;SPAN&gt; in &lt;/SPAN&gt;&lt;SPAN&gt;migrations/sql/&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;SPAN&gt;Commit and push&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;SPAN&gt;databricks bundle deploy -e test&lt;/SPAN&gt;&lt;SPAN&gt; → runs the migration job → runner sees 005 is not in history → applies it&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;SPAN&gt;databricks bundle deploy -e prod&lt;/SPAN&gt;&lt;SPAN&gt; → same thing for prod&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&lt;STRONG&gt;The runner is safe to re-run&lt;/STRONG&gt;&lt;SPAN&gt; — it always checks the history table first. Already-applied migrations are skipped.&lt;/SPAN&gt;&lt;/P&gt;
&lt;H2&gt;&lt;SPAN&gt;To Answer Your Specific Questions&lt;/SPAN&gt;&lt;/H2&gt;
&lt;UL&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;STRONG&gt;Version = the prefix&lt;/STRONG&gt;&lt;SPAN&gt; (001, 002, etc.). Exactly right — never edit old migrations, always add new files.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;STRONG&gt;One migration job, not one task per file&lt;/STRONG&gt;&lt;SPAN&gt; — the single Python runner task handles all files. No need to edit YAML when adding migrations.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;STRONG&gt;Version check is in the runner, not in each SQL file&lt;/STRONG&gt;&lt;SPAN&gt; — the runner reads the history table once, then only executes files whose version prefix isn't recorded yet.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;STRONG&gt;No need for Alembic&lt;/STRONG&gt;&lt;SPAN&gt; — this pattern gives you the same ordered, idempotent, environment-aware migrations without adding Python ORM complexity. Your migrations stay as plain SQL, which is easier for the whole team to work with.&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;</description>
      <pubDate>Thu, 26 Mar 2026 17:04:29 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/sql-schemas-migration/m-p/152202#M53785</guid>
      <dc:creator>anuj_lathi</dc:creator>
      <dc:date>2026-03-26T17:04:29Z</dc:date>
    </item>
    <item>
      <title>Re: SQL schemas migration</title>
      <link>https://community.databricks.com/t5/data-engineering/sql-schemas-migration/m-p/152344#M53811</link>
      <description>&lt;P&gt;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/182781"&gt;@anuj_lathi&lt;/a&gt;&amp;nbsp;and&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/34815"&gt;@Louis_Frolio&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;thank you very much! This is really great approach and example!&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 27 Mar 2026 18:59:01 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/sql-schemas-migration/m-p/152344#M53811</guid>
      <dc:creator>maikel</dc:creator>
      <dc:date>2026-03-27T18:59:01Z</dc:date>
    </item>
  </channel>
</rss>

