I have a use case where I want to set the DLT pipeline id in the configuration parameters of that same DLT pipeline.
In a notebook task we can reference workspace or task values with dynamic value references, e.g. task_id = {{task.id}} or {{task.name}}, save them as task parameters, and read them back later with dbutils.widgets.get("task_id").
Can we do something similar in a DLT pipeline?
For example, set dlt_pipeline_id = {{dlt_pipeline.name}} in the configuration and then read it with spark.conf.get("dlt_pipeline_id").
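For reference, this is the notebook-task pattern the question describes, which is standard Jobs behavior: a task parameter whose value is a dynamic value reference, read back in the notebook as a widget. The parameter name task_id follows the question; the print line is illustrative.

```python
# In the job's task settings, define a task parameter, e.g.
#   name:  task_id
#   value: {{task.id}}
# Databricks Jobs substitutes the dynamic value reference at run time,
# and the notebook reads the substituted value as a widget:
task_id = dbutils.widgets.get("task_id")  # dbutils is available in Databricks notebooks
print(f"Running as task: {task_id}")
```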
The answer is yes, you can achieve this in a DLT (Delta Live Tables) pipeline. Here are a few ways to do it:
Method 1: Using {{dlt_pipeline.name}} in the configuration
You can use the {{dlt_pipeline.name}} syntax in your DLT pipeline configuration, just like you would in a notebook task. This will replace the placeholder with the actual name of the DLT pipeline.
In your DLT pipeline configuration, add a parameter like this:
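The answer's example was not included in the post; below is a minimal sketch of what Method 1 would look like, assuming the {{dlt_pipeline.name}} placeholder were substituted in DLT configurations the way dynamic value references are in Jobs (the follow-up below reports that it is not). The key name dlt_pipeline_id comes from the question; the table name audit_table is hypothetical.

```python
# In the DLT pipeline settings (UI: Settings > Advanced > Configuration,
# or the "configuration" block of the pipeline's JSON spec), add:
#   "configuration": {
#     "dlt_pipeline_id": "{{dlt_pipeline.name}}"
#   }
#
# Then, in a notebook attached to the pipeline, read the value back:
import dlt
from pyspark.sql.functions import lit

pipeline_id = spark.conf.get("dlt_pipeline_id")

@dlt.table
def audit_table():
    # Stamp each row with the value read from the pipeline configuration.
    return spark.range(1).withColumn("pipeline_id", lit(pipeline_id))
```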
Hi @mourakshit, I tried all three methods you mentioned; none of them worked.
Method 1 returned the literal placeholders as the printed values, {{dlt_pipeline.name}} and {{dlt_pipeline.id}}, not the actual values.
Method 2 returned an error that no conf like spark.databricks.pipeline.name exists, and Method 3 gave the same result as Method 1.
It would be really helpful if you could show a demo example or screenshot using any of the methods.
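One way to narrow this down is to dump the Spark conf keys visible inside the pipeline and see which, if any, carry pipeline metadata. This is a debugging sketch only, not an official API for pipeline metadata: the "pipelines." prefix filter is an assumption about how DLT names its internal keys, and depending on your compute access mode, sparkContext may not be accessible.

```python
import dlt

@dlt.table
def conf_dump():
    # List every Spark conf entry visible inside the pipeline run.
    # The filter below is a guess at likely metadata keys, not
    # documented behavior.
    entries = [(k, v) for k, v in spark.sparkContext.getConf().getAll()
               if k.startswith("pipelines.") or "pipeline" in k.lower()]
    return spark.createDataFrame(entries, "key string, value string")
```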