Thanks for adding more details. IMHO, DLT pipelines are not designed to change their behavior based on a dynamic value. They're more for doing the same thing over and over, picking up incrementally from the last execution point. Stateful data processing.
Let me try to imagine a possible situation. Say I have 3 different data sources, but the data ingestion and processing are nearly identical, so I'd like to run the same DLT pipeline 3 times from a workflow job, passing a dynamic parameter that points to a different source location each time, to reuse the same implementation.
In that case, I'd just write the DLT pipeline definition in a notebook, create 3 DLT pipelines that use pipeline parameters to specify the different source locations, and then execute those pipelines from a job.
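Here's a minimal sketch of what the parameterized notebook could look like. The config key (`ingest.source_path`) and table name are made up for illustration; each of the 3 pipelines would set that key to its own source location in the pipeline configuration:

```python
# Parameterized DLT notebook (sketch, names are assumptions)
import dlt

# Read the source location from the pipeline's configuration
source_path = spark.conf.get("ingest.source_path")

@dlt.table(name="raw_events", comment="Incrementally ingested raw data")
def raw_events():
    # Auto Loader picks up new files incrementally from the configured location
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load(source_path)
    )
```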
Also, if you have a lot of ingestion routes and want to mass-produce pipelines, a Python metaprogramming approach may be helpful, along the lines of the sketch below.
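Roughly, the idea is to generate one table definition per source inside a loop. The source names and paths here are just placeholders:

```python
# Metaprogramming sketch: generate one bronze table per source (paths are made up)
import dlt

sources = {
    "orders":    "/mnt/raw/orders",
    "customers": "/mnt/raw/customers",
    "payments":  "/mnt/raw/payments",
}

def make_bronze_table(table_name, path):
    # Defining the table inside a function makes each iteration
    # capture its own name and path.
    @dlt.table(name=f"bronze_{table_name}")
    def bronze():
        return (
            spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", "json")
            .load(path)
        )

for table_name, path in sources.items():
    make_bronze_table(table_name, path)
```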
I hope I understand your point correctly.