<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Triggering DLT Pipelines with Dynamic Parameters in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/triggering-dlt-pipelines-with-dynamic-parameters/m-p/118858#M45729</link>
    <description>&lt;P&gt;Workflow jobs that run DLT pipelines seem to work differently from other job task types (notebook, dbt, etc.) in terms of parameters...&lt;/P&gt;&lt;P&gt;For notebooks, job parameters are pushed down to the notebook parameters (widgets), overwriting the parameter defaults. A common scenario is persisting the job run ID in a column of the tables for records that are inserted or updated. This is a need I have.&lt;/P&gt;&lt;P&gt;Generally, pushing job parameters down to DLT pipelines seems to be a common request (see &lt;A href="https://community.databricks.com/t5/data-engineering/can-i-pass-parameters-to-a-delta-live-table-pipeline-at-running/td-p/30440" target="_blank"&gt;Can I pass parameters to a Delta Live Table pipeli... - Databricks Community - 30440&lt;/A&gt;).&lt;/P&gt;&lt;P&gt;It is confusing and inconsistent that this cannot happen with DLT pipelines, i.e. jobs triggering the pipeline do not push parameters down to the DLT pipeline.&amp;nbsp;&lt;/P&gt;&lt;P&gt;Are there workarounds?&lt;/P&gt;</description>
    <pubDate>Mon, 12 May 2025 08:52:36 GMT</pubDate>
    <dc:creator>jericksoncea</dc:creator>
    <dc:date>2025-05-12T08:52:36Z</dc:date>
    <item>
      <title>Triggering DLT Pipelines with Dynamic Parameters</title>
      <link>https://community.databricks.com/t5/data-engineering/triggering-dlt-pipelines-with-dynamic-parameters/m-p/111581#M43940</link>
      <description>&lt;P&gt;&lt;SPAN&gt;Hi Team,&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;We have a scenario where we need to pass a dynamic parameter to a Spark job that will trigger a DLT pipeline in append mode. Can you please suggest an approach for this?&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Regards,&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Phani&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 03 Mar 2025 12:37:42 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/triggering-dlt-pipelines-with-dynamic-parameters/m-p/111581#M43940</guid>
      <dc:creator>Phani1</dc:creator>
      <dc:date>2025-03-03T12:37:42Z</dc:date>
    </item>
    <item>
      <title>Re: Triggering DLT Pipelines with Dynamic Parameters</title>
      <link>https://community.databricks.com/t5/data-engineering/triggering-dlt-pipelines-with-dynamic-parameters/m-p/111646#M43963</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/36892"&gt;@Phani1&lt;/a&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;DLT pipelines only support static parameters defined in the pipeline configuration. Could you elaborate on your scenario? Which parameters do you want to set dynamically?&lt;/P&gt;</description>
      <pubDate>Tue, 04 Mar 2025 02:55:11 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/triggering-dlt-pipelines-with-dynamic-parameters/m-p/111646#M43963</guid>
      <dc:creator>koji_kawamura</dc:creator>
      <dc:date>2025-03-04T02:55:11Z</dc:date>
    </item>
    <item>
      <title>Re: Triggering DLT Pipelines with Dynamic Parameters</title>
      <link>https://community.databricks.com/t5/data-engineering/triggering-dlt-pipelines-with-dynamic-parameters/m-p/111674#M43966</link>
      <description>&lt;P&gt;&lt;SPAN&gt;I want to trigger a Delta Live Tables (DLT) pipeline from a Databricks Job and pass a dynamic input parameter to apply a filter. However, it seems that pipeline settings can only be defined when creating the pipeline, and not when executing it. Is there a way to pass a dynamic value to the pipeline each time it's run?&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 04 Mar 2025 08:34:18 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/triggering-dlt-pipelines-with-dynamic-parameters/m-p/111674#M43966</guid>
      <dc:creator>Phani1</dc:creator>
      <dc:date>2025-03-04T08:34:18Z</dc:date>
    </item>
    <item>
      <title>Re: Triggering DLT Pipelines with Dynamic Parameters</title>
      <link>https://community.databricks.com/t5/data-engineering/triggering-dlt-pipelines-with-dynamic-parameters/m-p/111804#M44002</link>
      <description>&lt;P&gt;Thanks for adding more details. IMHO, DLT pipelines are not designed to change their behavior based on a dynamic value. They are meant to keep doing the same thing over and over, picking up incrementally from the last execution point: stateful data processing.&lt;/P&gt;
&lt;P&gt;Let me try to imagine a possible situation. Say I have 3 different data sources, but the data ingestion and processing are nearly identical, so I'd like to call the same DLT pipeline 3 times from a workflow job, passing a dynamic parameter pointing to a different source location each time, to reuse the same implementation.&lt;/P&gt;
&lt;P&gt;In that case, I'd write the DLT pipeline definition once in a notebook, create 3 DLT pipelines that use DLT configuration parameters to specify the different source locations, and then execute the pipelines from a job.&lt;/P&gt;
&lt;P&gt;Also, if you have a lot of ingestion routes and want to mass-produce pipelines, a &lt;A href="https://medium.com/@imran.akbar1995/metaprogramming-databricks-generating-delta-live-tables-dlt-dynamically-1fa8f78951eb" target="_blank"&gt;Python metaprogramming approach&lt;/A&gt; may be helpful.&lt;/P&gt;
&lt;P&gt;I hope I understand your point correctly.&lt;/P&gt;</description>
      <pubDate>Wed, 05 Mar 2025 09:25:54 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/triggering-dlt-pipelines-with-dynamic-parameters/m-p/111804#M44002</guid>
      <dc:creator>koji_kawamura</dc:creator>
      <dc:date>2025-03-05T09:25:54Z</dc:date>
    </item>
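The "one notebook, N pipelines" pattern described in the reply above can be sketched as follows. In a real DLT notebook the value would come from the pipeline's configuration block via `spark.conf.get(...)`; here a plain dict stands in for `spark.conf`, and the key and path names (`mypipeline.source_path`, `/mnt/raw/...`) are invented for illustration.

```python
# Hedged sketch: one notebook backing three pipelines that differ only in
# their "configuration" settings. A plain dict stands in for spark.conf.
def get_param(conf, key, default=None):
    """Read a pipeline configuration value, falling back to a default."""
    value = conf.get(key)
    return default if value is None else value

# Simulate three pipelines sharing the same notebook, each with its own config:
pipeline_confs = [
    {"mypipeline.source_path": "/mnt/raw/source_a"},
    {"mypipeline.source_path": "/mnt/raw/source_b"},
    {"mypipeline.source_path": "/mnt/raw/source_c"},
]
paths = [get_param(c, "mypipeline.source_path") for c in pipeline_confs]

# Inside the actual notebook this would look like:
#   source_path = spark.conf.get("mypipeline.source_path")
#   @dlt.table
#   def bronze():
#       return spark.readStream.format("cloudFiles").load(source_path)
```

Each pipeline's configuration is fixed at creation time, which is exactly why the same notebook must be registered three times rather than parameterized per run.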
    <item>
      <title>Re: Triggering DLT Pipelines with Dynamic Parameters</title>
      <link>https://community.databricks.com/t5/data-engineering/triggering-dlt-pipelines-with-dynamic-parameters/m-p/118858#M45729</link>
      <description>&lt;P&gt;Workflow jobs that run DLT pipelines seem to work differently from other job task types (notebook, dbt, etc.) in terms of parameters...&lt;/P&gt;&lt;P&gt;For notebooks, job parameters are pushed down to the notebook parameters (widgets), overwriting the parameter defaults. A common scenario is persisting the job run ID in a column of the tables for records that are inserted or updated. This is a need I have.&lt;/P&gt;&lt;P&gt;Generally, pushing job parameters down to DLT pipelines seems to be a common request (see &lt;A href="https://community.databricks.com/t5/data-engineering/can-i-pass-parameters-to-a-delta-live-table-pipeline-at-running/td-p/30440" target="_blank"&gt;Can I pass parameters to a Delta Live Table pipeli... - Databricks Community - 30440&lt;/A&gt;).&lt;/P&gt;&lt;P&gt;It is confusing and inconsistent that this cannot happen with DLT pipelines, i.e. jobs triggering the pipeline do not push parameters down to the DLT pipeline.&amp;nbsp;&lt;/P&gt;&lt;P&gt;Are there workarounds?&lt;/P&gt;</description>
      <pubDate>Mon, 12 May 2025 08:52:36 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/triggering-dlt-pipelines-with-dynamic-parameters/m-p/118858#M45729</guid>
      <dc:creator>jericksoncea</dc:creator>
      <dc:date>2025-05-12T08:52:36Z</dc:date>
    </item>
    <item>
      <title>Re: Triggering DLT Pipelines with Dynamic Parameters</title>
      <link>https://community.databricks.com/t5/data-engineering/triggering-dlt-pipelines-with-dynamic-parameters/m-p/119936#M45998</link>
      <description>&lt;P&gt;I ended up using the Databricks SDK to update the pipeline configuration before each run (a hack).&amp;nbsp;&lt;/P&gt;&lt;P&gt;I think the DLT side of Databricks is its own world; its jobs and repository configuration work differently from other features, and it has only just been fully integrated with Unity Catalog (&lt;A href="https://www.databricks.com/blog/2025-dlt-update-intelligent-fully-governed-data-pipelines" target="_blank"&gt;2025 DLT Update: Intelligent, fully governed data pipelines | Databricks Blog&lt;/A&gt;). Some consistency would be nice...&lt;/P&gt;</description>
      <pubDate>Thu, 22 May 2025 07:26:50 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/triggering-dlt-pipelines-with-dynamic-parameters/m-p/119936#M45998</guid>
      <dc:creator>jericksoncea</dc:creator>
      <dc:date>2025-05-22T07:26:50Z</dc:date>
    </item>
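The update-then-run hack mentioned above could look roughly like this with the Databricks Python SDK (`databricks-sdk`). This is a sketch, not verified against a live workspace: the pipeline ID and parameter names are placeholders, the SDK import is deferred so the pure merge logic stands alone, and a real call may need the full spec (libraries, clusters, target) resent, since the update replaces the pipeline settings.

```python
# Hedged sketch of "update configuration, then start an update" via the
# Databricks Python SDK. Requires databricks-sdk and workspace auth to run
# for real; merged_configuration() is the testable, pure part.
def merged_configuration(current, overrides):
    """Return the pipeline configuration dict with per-run overrides applied.

    DLT configuration values are strings, so overrides are stringified.
    """
    merged = dict(current or {})
    merged.update({k: str(v) for k, v in overrides.items()})
    return merged

def run_with_params(pipeline_id, overrides):
    """Rewrite the pipeline's configuration, then trigger a run (the hack)."""
    from databricks.sdk import WorkspaceClient  # deferred: needs SDK + auth
    w = WorkspaceClient()
    spec = w.pipelines.get(pipeline_id=pipeline_id).spec
    # NOTE: update() replaces the pipeline settings, so in practice you may
    # need to resend libraries, clusters, target, etc. from `spec` as well.
    w.pipelines.update(
        pipeline_id=pipeline_id,
        name=spec.name,
        configuration=merged_configuration(spec.configuration, overrides),
    )
    return w.pipelines.start_update(pipeline_id=pipeline_id)
```

The obvious caveat of this workaround: two runs triggered concurrently with different parameters would race on the shared pipeline configuration.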
    <item>
      <title>Re: Triggering DLT Pipelines with Dynamic Parameters</title>
      <link>https://community.databricks.com/t5/data-engineering/triggering-dlt-pipelines-with-dynamic-parameters/m-p/131314#M49040</link>
      <description>&lt;P&gt;Found a working example -&lt;/P&gt;&lt;P class=""&gt;&lt;SPAN class=""&gt;databricks pipelines update &amp;lt;pipeline_id&amp;gt; --json @new_config.json&lt;/SPAN&gt;&lt;/P&gt;&lt;P class=""&gt;&lt;SPAN class=""&gt;databricks pipelines start-update &amp;lt;pipeline_id&amp;gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P class=""&gt;&lt;SPAN class=""&gt;where the JSON file carries the parameters: before every run, update the &lt;/SPAN&gt;&lt;SPAN class=""&gt;parameters with a new JSON file.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 08 Sep 2025 22:54:56 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/triggering-dlt-pipelines-with-dynamic-parameters/m-p/131314#M49040</guid>
      <dc:creator>sas30</dc:creator>
      <dc:date>2025-09-08T22:54:56Z</dc:date>
    </item>
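A small sketch of generating the `new_config.json` passed to the CLI commands in the post above (`databricks pipelines update <pipeline_id> --json @new_config.json`, then `databricks pipelines start-update <pipeline_id>`). The `configuration` field name follows the Pipelines API; the parameter keys here are placeholders.

```python
# Hedged sketch: write the per-run JSON file consumed by
# `databricks pipelines update <pipeline_id> --json @new_config.json`.
# Parameter keys (source_path, run_id) are illustrative placeholders.
import json

def write_update_config(path, run_params):
    """Write a pipeline-update JSON whose configuration carries run params.

    DLT configuration values are strings, so values are stringified.
    """
    config = {"configuration": {k: str(v) for k, v in run_params.items()}}
    with open(path, "w") as f:
        json.dump(config, f, indent=2)
    return config

cfg = write_update_config(
    "new_config.json",
    {"source_path": "/mnt/raw/source_a", "run_id": 42},
)
```

A job could run this script as a first task, shell out to the two CLI commands, and thereby approximate per-run parameters, with the same concurrency caveat as any update-then-run approach.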
    <item>
      <title>Re: Triggering DLT Pipelines with Dynamic Parameters</title>
      <link>https://community.databricks.com/t5/data-engineering/triggering-dlt-pipelines-with-dynamic-parameters/m-p/141849#M51829</link>
      <description>&lt;P&gt;Can you provide some more details? I'm not really following your answer...&lt;/P&gt;</description>
      <pubDate>Mon, 15 Dec 2025 10:36:06 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/triggering-dlt-pipelines-with-dynamic-parameters/m-p/141849#M51829</guid>
      <dc:creator>bombercorny</dc:creator>
      <dc:date>2025-12-15T10:36:06Z</dc:date>
    </item>
    <item>
      <title>Re: Triggering DLT Pipelines with Dynamic Parameters</title>
      <link>https://community.databricks.com/t5/data-engineering/triggering-dlt-pipelines-with-dynamic-parameters/m-p/143535#M52197</link>
      <description>&lt;P&gt;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/90461"&gt;@koji_kawamura&lt;/a&gt;&amp;nbsp;: I have more or less the same scenario, say with 3 tables.&lt;/P&gt;&lt;P&gt;The sources and targets are different, but I would like to use a generic pipeline, pass in the source and target as parameters, and run them in parallel.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/183518"&gt;@sas30&lt;/a&gt;&amp;nbsp;: could you elaborate?&lt;/P&gt;&lt;P&gt;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/115306"&gt;@bombercorny&lt;/a&gt;&amp;nbsp;: got any info on this?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 09 Jan 2026 20:54:24 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/triggering-dlt-pipelines-with-dynamic-parameters/m-p/143535#M52197</guid>
      <dc:creator>Sudharsan</dc:creator>
      <dc:date>2026-01-09T20:54:24Z</dc:date>
    </item>
    <item>
      <title>Re: Triggering DLT Pipelines with Dynamic Parameters</title>
      <link>https://community.databricks.com/t5/data-engineering/triggering-dlt-pipelines-with-dynamic-parameters/m-p/146417#M52645</link>
      <description>&lt;P&gt;If you’re looking to build a dynamic, configuration-driven DLT pipeline, a better approach is to use a configuration table. This table should include fields such as table_name, pipeline_name, table_properties, and other relevant settings. Your notebook can then query this table, filtering on the table and pipeline names passed in dynamically through variables, and the resolved properties can be accessed directly within your code.&lt;/P&gt;&lt;P&gt;You can always keep things dynamic by updating the parameters in this table.&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sun, 01 Feb 2026 14:11:31 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/triggering-dlt-pipelines-with-dynamic-parameters/m-p/146417#M52645</guid>
      <dc:creator>pradeep_singh</dc:creator>
      <dc:date>2026-02-01T14:11:31Z</dc:date>
    </item>
    <item>
      <title>Re: Triggering DLT Pipelines with Dynamic Parameters</title>
      <link>https://community.databricks.com/t5/data-engineering/triggering-dlt-pipelines-with-dynamic-parameters/m-p/146419#M52647</link>
      <description>&lt;P&gt;In your config table you can also add a table_active_status column set to Y or N. If you want a table included in the pipeline, set it to Y; if you want to disable it for any reason, use N. The code for a specific table only runs if its status is active.&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sun, 01 Feb 2026 14:33:28 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/triggering-dlt-pipelines-with-dynamic-parameters/m-p/146419#M52647</guid>
      <dc:creator>pradeep_singh</dc:creator>
      <dc:date>2026-02-01T14:33:28Z</dc:date>
    </item>
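The configuration-table pattern from the last two replies can be sketched as below. The column names (`table_name`, `pipeline_name`, `table_active_status`) follow the posts; everything else is illustrative, and the table is mocked as a list of dicts where a workspace would query a real Delta table (e.g. via `spark.sql(...)`).

```python
# Hedged sketch of a configuration-table-driven pipeline: a metadata table
# decides which tables the pipeline builds and which are switched off.
# In a workspace, CONFIG_TABLE would be a Delta table, not a literal.
CONFIG_TABLE = [
    {"table_name": "orders",    "pipeline_name": "sales_dlt",
     "source": "/mnt/raw/orders",    "table_active_status": "Y"},
    {"table_name": "customers", "pipeline_name": "sales_dlt",
     "source": "/mnt/raw/customers", "table_active_status": "Y"},
    {"table_name": "returns",   "pipeline_name": "sales_dlt",
     "source": "/mnt/raw/returns",   "table_active_status": "N"},
]

def active_entries(rows, pipeline_name):
    """Rows for this pipeline whose table_active_status is 'Y'."""
    return [
        r for r in rows
        if r["pipeline_name"] == pipeline_name
        and r["table_active_status"] == "Y"
    ]

# The DLT notebook would loop over these rows and define one @dlt.table per
# active entry (the metaprogramming approach mentioned earlier in the thread).
names = [r["table_name"] for r in active_entries(CONFIG_TABLE, "sales_dlt")]
```

Because the notebook re-reads the table on every pipeline update, editing a row changes behavior on the next run without touching the pipeline settings, which is what makes this a workable stand-in for per-run parameters.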
  </channel>
</rss>

