Triggering DLT Pipelines with Dynamic Parameters
03-03-2025 04:37 AM
Hi Team,
We have a scenario where we need to pass a dynamic parameter to a Spark job that will trigger a DLT pipeline in append mode. Can you please suggest an approach for this?
Regards,
Phani
03-03-2025 06:55 PM
Hi @Phani1
DLT pipelines only support static parameters, which are defined in the pipeline configuration. Could you elaborate on your scenario? Which parameters do you want to set dynamically?
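For reference, a static parameter like that lives in the pipeline's settings (the `configuration` block) and is read inside the pipeline code through the Spark conf. A minimal sketch, where `mypipeline.filter_value` is just an illustrative key name:

```python
# Set once in the pipeline settings, e.g.
#   "configuration": { "mypipeline.filter_value": "US" }
# then read inside the pipeline's source code:
filter_value = spark.conf.get("mypipeline.filter_value", "US")
```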
03-04-2025 12:34 AM
I want to trigger a Delta Live Tables (DLT) pipeline from a Databricks Job and pass a dynamic input parameter to apply a filter. However, it seems that pipeline settings can only be defined when creating the pipeline, and not when executing it. Is there a way to pass a dynamic value to the pipeline each time it's run?
03-05-2025 01:25 AM
Thanks for adding more details. IMHO, DLT pipelines are not designed to change their behavior based on a dynamic value. They are designed to do the same thing repeatedly, processing incrementally from the last execution point: stateful data processing.
Let me imagine a possible situation. Say I have 3 different data sources, but the data ingestion and processing are nearly identical. I'd like to call the same DLT pipeline 3 times from a workflow job, passing a parameter that points to a different source location each time, so the same implementation is reused.
In that case, I'd write the DLT pipeline definition once in a notebook, create 3 DLT pipelines that use DLT configuration parameters to specify the different source locations, and then execute those pipelines from a job.
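To illustrate, here's a minimal sketch of what that shared notebook could look like. `source_path` is a hypothetical configuration key; each of the 3 pipelines would set it to a different location in its own settings:

```python
import dlt

# "source_path" is a hypothetical key; each of the 3 pipelines sets a
# different value for it in its own "configuration" settings.
source_path = spark.conf.get("source_path")

@dlt.table(
    name="bronze_events",
    comment="Incremental, append-mode ingest from the configured source",
)
def bronze_events():
    # Auto Loader picks up only new files on each run, which matches
    # DLT's incremental, stateful processing model.
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load(source_path)
    )
```

Each pipeline keeps its own checkpoints and state, so every source is still processed incrementally.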
Also, if you have many ingestion routes and want to mass-produce pipelines, a Python metaprogramming approach may be helpful.
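One common form of this is the table-factory pattern: generate one table definition per route in a loop inside a single pipeline. A minimal sketch; the `SOURCES` mapping and bucket paths below are made up for illustration:

```python
import dlt

# Hypothetical mapping of ingestion routes to source locations; in
# practice this could come from a config file or a metadata table.
SOURCES = {
    "orders": "s3://my-bucket/raw/orders/",
    "customers": "s3://my-bucket/raw/customers/",
    "payments": "s3://my-bucket/raw/payments/",
}

def make_bronze_table(name, path):
    # Factory function: binds name/path at call time, so each generated
    # table reads from its own source location (this avoids the
    # late-binding pitfall of decorating a function directly in the loop).
    @dlt.table(name=f"bronze_{name}", comment=f"Raw ingest of {name}")
    def bronze():
        return (
            spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", "json")
            .load(path)
        )

for name, path in SOURCES.items():
    make_bronze_table(name, path)
```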
I hope I understand your point correctly.

