Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Cluster Config

Naren1
New Contributor

Hi, can we pass a parameter into job activity from ADF side to change the environment inside the job cluster configuration?

1 ACCEPTED SOLUTION

Accepted Solutions

K_Anudeep
Databricks Employee

Hello @Naren1 ,

Yes, you can pass parameters from ADF into a Databricks Job run, but you generally can't use those parameters to change the job cluster configuration (node type, Spark version, autoscale, init scripts, etc.) for that run.
In an ADF Databricks Job activity, the supported runtime customization is jobParameters (key-value pairs) that are passed into the job run. Doc: https://learn.microsoft.com/en-us/azure/data-factory/transform-data-databricks-job#databricks-job-ac.... With the Job activity, ADF runs an existing Databricks jobId, optionally with jobParameters.
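As a minimal sketch of the pattern above: jobParameters reach your task code (in a notebook task you would typically read them with dbutils.widgets.get), where you can branch on them; they do not reach the cluster spec. The parameter name "env" and the settings values below are illustrative assumptions, not from the thread. A plain function keeps the example self-contained:

```python
# Hypothetical example: branch task behaviour on an ADF-supplied
# "env" job parameter. In a Databricks notebook task you would get
# the value with: env = dbutils.widgets.get("env")

# Illustrative per-environment settings (names are assumptions)
ENV_SETTINGS = {
    "dev":  {"catalog": "dev_catalog",  "schema": "staging"},
    "prod": {"catalog": "prod_catalog", "schema": "main"},
}

def resolve_env(env: str) -> dict:
    """Map the 'env' job parameter to environment-specific settings."""
    try:
        return ENV_SETTINGS[env]
    except KeyError:
        raise ValueError(f"Unknown environment: {env!r}")
```

Note that this only changes what the task does at runtime; the job cluster itself is still created from the cluster spec stored in the job definition.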

Could you help me understand what you mean by "environment": different libraries, a different Spark version, or a different node size?

 

Anudeep

