I have a use case where I want to set the DLT pipeline ID in the configuration parameters of that same DLT pipeline.
In a notebook task, we can use dynamic values such as workspace IDs or the task ID, e.g. task_id = {{task.id}} / {{task.name}}, save them as parameters, and retrieve them later with dbutils.widgets.get("task_id").
Can we do something similar in a DLT pipeline, e.g. set dlt_pipeline_id = {{dlt_pipeline.name}} in the configuration and then read it with spark.conf.get("dlt_pipeline_id")?
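For context, the notebook-task pattern referred to above looks roughly like this (a minimal sketch; task_name is just an illustrative parameter name):

```python
# Job task settings (UI or JSON): map a base parameter to a dynamic
# value reference, which the Jobs service substitutes at run time, e.g.
#
#   "base_parameters": { "task_name": "{{task.name}}" }
#
# Inside the notebook, read the substituted value back as a widget:
task_name = dbutils.widgets.get("task_name")
print(f"Running as task: {task_name}")
```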
The answer is yes, you can achieve this in a DLT (Delta Live Tables) pipeline. Here are a few ways to do it:
Method 1: Using {{dlt_pipeline.name}} in the configuration
You can use the {{dlt_pipeline.name}} syntax in your DLT pipeline configuration, just like you would in a notebook task. This will replace the placeholder with the actual name of the DLT pipeline.
In your DLT pipeline configuration, add a parameter like this:
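A minimal sketch, assuming the placeholder substitution works as described above (the key name dlt_pipeline_id is just an example):

```python
# DLT pipeline settings (UI: Advanced > Configuration, or the pipeline
# JSON under "configuration"): add a key/value pair such as
#
#   "configuration": { "dlt_pipeline_id": "{{dlt_pipeline.name}}" }
#
# Inside the pipeline's source code, read the resolved value back
# through the Spark configuration:
dlt_pipeline_id = spark.conf.get("dlt_pipeline_id")
print(f"Pipeline: {dlt_pipeline_id}")
```

Note that dbutils.widgets is not available inside DLT pipelines; spark.conf.get is how configuration values are read there.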