<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Delta table update in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/delta-table-update/m-p/154225#M54074</link>
    <description>&lt;P&gt;Thanks&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/182781"&gt;@anuj_lathi&lt;/a&gt; for the detailed explanation. This helps a lot.&lt;/P&gt;</description>
    <pubDate>Sun, 12 Apr 2026 20:37:57 GMT</pubDate>
    <dc:creator>databrciks</dc:creator>
    <dc:date>2026-04-12T20:37:57Z</dc:date>
    <item>
      <title>Delta table update</title>
      <link>https://community.databricks.com/t5/data-engineering/delta-table-update/m-p/154094#M54066</link>
      <description>&lt;DIV&gt;Hi Experts,&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;I have around 100 tables in the bronze layer (DLT pipeline). We have created a silver layer of around 20 tables based on some business logic.&lt;/DIV&gt;&lt;DIV&gt;How do I run a specific flow in the silver layer whenever an update happens in the bronze layer?&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;Say, for example, if a flow uses t1, t2, t3, then trigger the silver pipeline for those tables;&lt;/DIV&gt;&lt;DIV&gt;if a flow uses t5, t11, t13, then trigger the silver pipeline for those tables.&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;So there could be 50 table updates in the bronze layer, and I need to trigger only those flows in the silver layer where those 50 tables are used (could be 10 flows in the silver layer).&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;How do I design this flow? Please advise.&lt;/DIV&gt;</description>
      <pubDate>Fri, 10 Apr 2026 18:47:15 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/delta-table-update/m-p/154094#M54066</guid>
      <dc:creator>databrciks</dc:creator>
      <dc:date>2026-04-10T18:47:15Z</dc:date>
    </item>
    <item>
      <title>Re: Delta table update</title>
      <link>https://community.databricks.com/t5/data-engineering/delta-table-update/m-p/154101#M54067</link>
      <description>&lt;P&gt;&lt;SPAN&gt;Hi — great question! This is a common pattern when you have a large medallion architecture with many bronze-to-silver dependencies. There are several approaches you can take, ranging from simple to more advanced.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;———&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Option 1: Single DLT Pipeline with Declarative Dependencies (Recommended)&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN&gt;The simplest and most elegant approach is to &lt;/SPAN&gt;&lt;STRONG&gt;define both your bronze and silver layers in the same DLT pipeline&lt;/STRONG&gt;&lt;SPAN&gt; (or use multiple pipelines with shared datasets). DLT is inherently declarative — if you define your silver tables as reading from bronze tables, &lt;/SPAN&gt;&lt;STRONG&gt;DLT automatically handles the dependency graph&lt;/STRONG&gt;&lt;SPAN&gt; and only processes what needs to be updated.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;import dlt&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;from pyspark.sql.functions import col&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;# Bronze layer&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;@dlt.table(name="bronze_t1")&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;def bronze_t1():&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;return spark.readStream.format("cloudFiles").option("cloudFiles.format", "json").load("/data/t1/")&amp;nbsp; # cloudFiles.format is required; "json" is an example&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;@dlt.table(name="bronze_t2")&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;def bronze_t2():&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;return spark.readStream.format("cloudFiles").option("cloudFiles.format", "json").load("/data/t2/")&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;# Silver layer — DLT knows this depends on bronze_t1 and bronze_t2&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;@dlt.table(name="silver_flow_1")&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;def silver_flow_1():&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;t1 = dlt.read_stream("bronze_t1")&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;t2 = dlt.read_stream("bronze_t2")&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;return t1.join(t2, "key_col").filter(col("status") == "active")&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Why this works:&lt;/STRONG&gt;&lt;SPAN&gt; When you trigger the pipeline, DLT resolves the DAG. If bronze_t1 has new data, silver_flow_1 will process it. If bronze_t5 has no new data, any silver table depending only on bronze_t5 won't do unnecessary work; streaming tables will simply have no new records to process.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Tip:&lt;/STRONG&gt;&lt;SPAN&gt; Run the pipeline in &lt;/SPAN&gt;&lt;STRONG&gt;Triggered mode&lt;/STRONG&gt;&lt;SPAN&gt; (each update processes whatever new data is available, then stops) or &lt;/SPAN&gt;&lt;STRONG&gt;Continuous mode&lt;/STRONG&gt;&lt;SPAN&gt;, depending on your latency needs.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;———&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Option 2: Separate Silver Pipelines Triggered via Databricks Workflows&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN&gt;If you need &lt;/SPAN&gt;&lt;STRONG&gt;separate DLT pipelines&lt;/STRONG&gt;&lt;SPAN&gt; per silver flow (for isolation, independent scheduling, or team ownership), you can orchestrate them using &lt;/SPAN&gt;&lt;STRONG&gt;Databricks Workflows with dependencies&lt;/STRONG&gt;&lt;SPAN&gt;:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 1&lt;/STRONG&gt;&lt;SPAN&gt; — Create a Workflow where:&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;STRONG&gt;Task 1:&lt;/STRONG&gt;&lt;SPAN&gt; Bronze DLT pipeline (ingests all 100 tables)&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;STRONG&gt;Task 2a:&lt;/STRONG&gt;&lt;SPAN&gt; Silver Flow 1 DLT pipeline (depends on Task 1)&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;STRONG&gt;Task 2b:&lt;/STRONG&gt;&lt;SPAN&gt; Silver Flow 2 DLT pipeline (depends on Task 1)&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI style="font-weight: 400;" aria-level="1"&gt;&lt;SPAN&gt;...and so on&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN&gt;This ensures silver pipelines run after bronze completes. However, all silver pipelines will run every time.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 2&lt;/STRONG&gt;&lt;SPAN&gt; — To make it &lt;/SPAN&gt;&lt;STRONG&gt;selective&lt;/STRONG&gt;&lt;SPAN&gt; (only trigger silver flows whose source bronze tables changed), add a lightweight check:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;# In a notebook task that runs before each silver pipeline&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;def check_bronze_tables_changed(bronze_tables, since_timestamp):&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;"""Check if any of the specified bronze tables have new data since last run."""&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;for table in bronze_tables:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;history = spark.sql(f"DESCRIBE HISTORY {table} LIMIT 5")&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;latest_entry = history.select("timestamp").first()&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if latest_entry and str(latest_entry["timestamp"]) &amp;gt; since_timestamp:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;return True&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;return False&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;# Example: Silver Flow 1 depends on t1, t2, t3&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;tables_changed = check_bronze_tables_changed(&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;["catalog.bronze.t1", "catalog.bronze.t2", "catalog.bronze.t3"],&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;last_run_timestamp&amp;nbsp; # Track this in a control table or widget&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;)&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;if not tables_changed:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;dbutils.notebook.exit("SKIP - no upstream changes")&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Then use the &lt;/SPAN&gt;&lt;STRONG&gt;If/else condition task&lt;/STRONG&gt;&lt;SPAN&gt; in Workflows (or use dbutils.notebook.exit() return values) to conditionally run or skip the downstream silver DLT pipeline.&lt;/SPAN&gt;&lt;/P&gt;
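A minimal sketch of how the check notebook can hand its result to the Workflow for the If/else route, assuming a Databricks notebook context where dbutils is injected (the task name "check" and the value key "tables_changed" are illustrative names, not from the thread):

```python
def publish_check_result(dbutils, tables_changed: bool) -> str:
    """Record the upstream-change check so a downstream If/else
    condition task can branch on it via task values."""
    # Later tasks in the same job run can read this as
    # {{tasks.check.values.tables_changed}} (assuming this task is named "check").
    dbutils.jobs.taskValues.set(key="tables_changed", value=tables_changed)
    return "RUN" if tables_changed else "SKIP"
```

The If/else condition task then compares {{tasks.check.values.tables_changed}} against true and runs the silver DLT pipeline only on the true branch.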
&lt;P&gt;&lt;SPAN&gt;———&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Option 3: Event-Driven with Delta Change Data Feed + Lakeflow Jobs&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&lt;SPAN&gt;For a truly &lt;/SPAN&gt;&lt;STRONG&gt;event-driven&lt;/STRONG&gt;&lt;SPAN&gt; architecture:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 1 — Enable Change Data Feed (CDF)&lt;/STRONG&gt;&lt;SPAN&gt; on your bronze tables:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;ALTER TABLE catalog.bronze.t1&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;SET TBLPROPERTIES (delta.enableChangeDataFeed = true);&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;Step 2 — Use a Lakeflow/Workflow trigger&lt;/STRONG&gt;&lt;SPAN&gt; — configure a &lt;/SPAN&gt;&lt;STRONG&gt;File Arrival trigger&lt;/STRONG&gt;&lt;SPAN&gt; or use &lt;/SPAN&gt;&lt;STRONG&gt;Databricks Asset Bundles&lt;/STRONG&gt;&lt;SPAN&gt; to set up event-based triggers. When new files arrive in bronze source locations, only the relevant pipeline fires.&lt;/SPAN&gt;&lt;/P&gt;
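For the File Arrival route, the trigger lives in the job settings rather than in code. A hedged sketch of the relevant Jobs API 2.1 settings fragment, shown as a Python dict (the storage path is a placeholder; point it at your bronze landing location):

```python
# Partial Jobs API 2.1 job settings that enable a file-arrival trigger,
# so the job fires only when new files land in the watched location.
file_arrival_settings = {
    "trigger": {
        "file_arrival": {
            # Placeholder path; use the bronze source location for this flow.
            "url": "/Volumes/catalog/bronze/landing/",
            # Optional debounce so a burst of files causes a single run.
            "min_time_between_triggers_seconds": 300,
        }
    }
}
```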
&lt;P&gt;&lt;STRONG&gt;Step 3 — Or use a dispatcher notebook&lt;/STRONG&gt;&lt;SPAN&gt; that reads the change data from bronze tables, maintains a mapping of bronze table to silver pipeline, and triggers only the relevant silver pipelines via the Jobs API:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;import requests&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;# Mapping: which silver pipeline depends on which bronze tables&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;DEPENDENCY_MAP = {&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;"silver_pipeline_1_job_id": [&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;"catalog.bronze.t1", "catalog.bronze.t2", "catalog.bronze.t3"&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;],&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;"silver_pipeline_2_job_id": [&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;"catalog.bronze.t5", "catalog.bronze.t11", "catalog.bronze.t13"&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;],&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;}&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;def get_changed_tables(since_version):&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;"""Identify which bronze tables have changed."""&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;changed = set()&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;for table in ALL_BRONZE_TABLES:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;try:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;changes = (spark.read.format("delta")&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;.option("readChangeFeed", "true")&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;.option("startingVersion", since_version)&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;.table(table))&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if not changes.isEmpty():&amp;nbsp; # cheaper than count() for an existence check&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;changed.add(table)&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;except Exception:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;pass&amp;nbsp; # CDF not enabled or no history in this range; treat table as unchanged&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;return changed&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;changed_tables = get_changed_tables(last_processed_version)&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;# Trigger only relevant silver pipelines&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;for job_id, dependencies in DEPENDENCY_MAP.items():&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;if any(t in changed_tables for t in dependencies):&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;response = requests.post(&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;f"{host}/api/2.1/jobs/run-now",&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;headers={"Authorization": f"Bearer {token}"},&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;json={"job_id": int(job_id)}&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;)&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;print(f"Triggered job {job_id}: {response.status_code}")&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;———&lt;/SPAN&gt;&lt;/P&gt;
&lt;H3&gt;&lt;STRONG&gt;Summary and Recommendation&lt;/STRONG&gt;&lt;/H3&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD&gt;
&lt;P&gt;&lt;STRONG&gt;Approach&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;STRONG&gt;Complexity&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;STRONG&gt;Best For&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;
&lt;P&gt;&lt;STRONG&gt;Option 1: Single DLT Pipeline&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;SPAN&gt;Low&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;SPAN&gt;When all bronze+silver can live in one pipeline. DLT handles the DAG natively.&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;
&lt;P&gt;&lt;STRONG&gt;Option 2: Workflows + Conditional Tasks&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;SPAN&gt;Medium&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;SPAN&gt;When silver pipelines must be separate but you want a simple orchestration layer.&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;
&lt;P&gt;&lt;STRONG&gt;Option 3: Event-Driven (CDF + Dispatcher)&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;SPAN&gt;Higher&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;SPAN&gt;When you need true event-driven, minimal-compute triggering at scale.&lt;/SPAN&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&lt;STRONG&gt;My recommendation:&lt;/STRONG&gt;&lt;SPAN&gt; Start with &lt;/SPAN&gt;&lt;STRONG&gt;Option 1&lt;/STRONG&gt;&lt;SPAN&gt; if possible. DLT's declarative model is built exactly for this use case — you define &lt;/SPAN&gt;&lt;I&gt;&lt;SPAN&gt;what&lt;/SPAN&gt;&lt;/I&gt;&lt;SPAN&gt; each silver table reads from, and DLT figures out &lt;/SPAN&gt;&lt;I&gt;&lt;SPAN&gt;when&lt;/SPAN&gt;&lt;/I&gt;&lt;SPAN&gt; and &lt;/SPAN&gt;&lt;I&gt;&lt;SPAN&gt;what&lt;/SPAN&gt;&lt;/I&gt;&lt;SPAN&gt; to process. If you need pipeline isolation for operational reasons, go with &lt;/SPAN&gt;&lt;STRONG&gt;Option 2&lt;/STRONG&gt;&lt;SPAN&gt; using DESCRIBE HISTORY checks, which is straightforward to implement and maintain.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Hope this helps! Let me know if you have questions about any of these approaches.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Sat, 11 Apr 2026 03:16:56 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/delta-table-update/m-p/154101#M54067</guid>
      <dc:creator>anuj_lathi</dc:creator>
      <dc:date>2026-04-11T03:16:56Z</dc:date>
    </item>
    <item>
      <title>Re: Delta table update</title>
      <link>https://community.databricks.com/t5/data-engineering/delta-table-update/m-p/154225#M54074</link>
      <description>&lt;P&gt;Thanks&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/182781"&gt;@anuj_lathi&lt;/a&gt; for the detailed explanation. This helps a lot.&lt;/P&gt;</description>
      <pubDate>Sun, 12 Apr 2026 20:37:57 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/delta-table-update/m-p/154225#M54074</guid>
      <dc:creator>databrciks</dc:creator>
      <dc:date>2026-04-12T20:37:57Z</dc:date>
    </item>
  </channel>
</rss>

