Parameterizing DLT Pipelines

Mustafa_Kamal
New Contributor II

Hi Everyone,

I have a DLT pipeline which I need to execute for different source systems. Need advice on how to parameterize this.

I have gone through many articles on the web, but there doesn't seem to be any accurate information available.

Can anyone please help?

Thanks,

4 REPLIES

AmanSehgal
Honored Contributor III

You can provide parameters in the configuration section of the DLT pipeline and access them in your code using spark.conf.get(<parameter_name>).

Parameterize DLT pipelines 
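For example, here's a minimal sketch in Python, assuming a parameter named source_system was added under the pipeline's configuration settings (the parameter name, default value, table name, and landing path are all illustrative placeholders):

import dlt

# Reads the value set in the pipeline settings, e.g.
#   "configuration": { "source_system": "sales" }
# "source_system" and the "sales" default are placeholder names.
source_system = spark.conf.get("source_system", "sales")

@dlt.table(name=f"{source_system}_raw")
def raw_data():
    # Placeholder landing path; point this at your actual source location.
    return spark.read.format("json").load(f"/mnt/landing/{source_system}/")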

Mustafa_Kamal
New Contributor II

Thank you @AmanSehgal,

I have done that and was able to execute the pipeline successfully. But I need to change the parameter value at run time, so that the same pipeline can be used for multiple sources.

Can we pass parameters from a Job to a DLT pipeline?

AmanSehgal
Honored Contributor III

I tried, but it doesn't seem to work. I passed job parameters into the cluster configuration as {{job.parameters.name}}, but the value wasn't picked up.

You might have to put wrapper logic around your pipeline for now to parameterize it. For instance, write some text into a file in a mounted location and then read it in the DLT pipeline for your conditional logic.
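Roughly, a sketch of that workaround (the file path, parameter value, and table name are all placeholders): a job task writes the value to a file before triggering the pipeline, and the DLT code reads it back.

# In a notebook task that runs before the pipeline:
# write the desired source system name to a known mounted location.
dbutils.fs.put("/mnt/config/dlt_source.txt", "sales", overwrite=True)

# Inside the DLT pipeline:
import dlt

try:
    # Read the parameter back from the mounted file.
    source_system = spark.read.text("/mnt/config/dlt_source.txt").first()[0].strip()
except Exception:
    source_system = "sales"  # fallback default (illustrative)

@dlt.table(name=f"{source_system}_bronze")
def bronze():
    return spark.read.format("json").load(f"/mnt/landing/{source_system}/")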

Mustafa_Kamal
New Contributor II

Hi @AmanSehgal, yes, that's true. I had tried different options here. Thank you so much for taking the time to investigate this.
