Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

DLT PARAMETERIZATION FROM JOB PARAMETERS

hidden
New Contributor II

I have created a DLT pipeline notebook that creates tables based on a config file describing the tables to be created. Now I want to run the pipeline every 30 minutes for 4 tables from the config, and every 3 hours for the remaining tables. How can I do that? I want to reuse the same pipeline, and I can create two jobs for it. Can someone tell me how to pass a parameter (tables_to_run) from the jobs to the DLT pipeline?

 

1 REPLY

Coffee77
Contributor III

Define the parameters in the job as usual, then read them in the DLT notebook via the pipeline configuration with code similar to this:

spark.conf.get("PARAMETER_NAME", "PARAMETER_DEFAULT_VALUE")

It returns the job-supplied value if one exists; otherwise it falls back to the default value.
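A minimal sketch of how this could fit together, assuming a comma-separated configuration value named tables_to_run (the parameter name comes from the question; the helper and table names are hypothetical). The job (or pipeline configuration) supplies the string, and the notebook parses it into a list before registering tables:

```python
# Hypothetical helper: turn the comma-separated "tables_to_run" value into a
# clean list of table names, falling back to a default when it is empty.
def parse_tables_param(raw, default="table_a,table_b"):
    value = raw if raw else default
    return [name.strip() for name in value.split(",") if name.strip()]

# In the DLT notebook, the raw value would typically come from the pipeline
# configuration, e.g.:
#   raw = spark.conf.get("tables_to_run", "")
# and each parsed name could then drive table registration, e.g.:
#   for table_name in parse_tables_param(raw):
#       define the dlt.table for table_name based on the config file
```

With this approach, the 30-minute job and the 3-hour job can both target the same pipeline and differ only in the tables_to_run value they pass in.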

 


Lifelong Learner Cloud & Data Solution Architect | https://www.youtube.com/@CafeConData
