Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

How to manage shared parameters between different jobs?

flourishingsing
New Contributor

I have a few jobs that run serverless. Currently there is only one task per job. These tasks take multiple parameters; some are shared between the jobs and some are unique to each job.

Is it possible to define all the parameters for all the jobs in a centralised location and then reference it for each task?

job.yml:

tasks:
  - task_key: my_key
    spark_python_task:
      python_file: my_path
      parameters: ${var.default_serverless_parameters}
    environment_key: my_environment_key

 

serverless_parameters.yml

variables:
  default_serverless_parameters:
    description: "My list of parameters"
    type: complex
    default:
      - --param1
      - ${value1}
      - --param2
      - ${value2}

 

databricks bundle validate accepts the above, but when I try to run it I get the following error:

TypeError: spark_python_task.parameters must be a list

 

1 ACCEPTED SOLUTION


Pat
Esteemed Contributor

It's odd: I tried your definition and it works well when I replace ${value1} and ${value2}. I'm not sure how you are using those; you should prefix them with var. (for example, ${var.value1}) or use static values, like I did.

Pat_0-1771196490262.png
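For anyone hitting the same TypeError, here is a minimal sketch of a variable file that resolves correctly. The env variable and its value are illustrative assumptions, not from the original post; the key point is that references nested inside a complex variable need the var. prefix (or a plain static value):

```yaml
variables:
  env:
    description: "Deployment environment (illustrative example)"
    default: dev
  default_serverless_parameters:
    description: "My list of parameters"
    type: complex
    default:
      - --param1
      - ${var.env}          # nested references must use the var. prefix
      - --param2
      - some-static-value   # or simply use a static value
```

The task can then reference ${var.default_serverless_parameters} exactly as in the original job.yml. If the variables live in a separate serverless_parameters.yml, make sure that file is listed under include: in databricks.yml so the bundle picks it up.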

 
