Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

"Run now with different parameters" - different parameters not recognized by jobs involving multiple tasks

New Contributor III

I'm running a Databricks job involving multiple tasks and would like to run the job with a different set of task parameters. I can achieve that by editing each task and changing the parameter values, but that gets very manual when I have a lot of tasks. I'm thinking of using the "Run now with different parameters" option and passing in a different job JSON file. However, it looks like no matter what I pass into the JSON file at the job level, my tasks still run with the old parameter values.

Does the "Run now with different parameters" option support changing parameter values for several tasks at once?


Not applicable

Hello, @Shirly Wang​! My name is Piper, and I'm a moderator for the Databricks community. Welcome to the community and thank you for your question! It's nice to meet you. Let's give the members of the community a chance to respond before we return to your question.

Thanks for your patience.

Community Manager

Hi @Shirly Wang​, to run a job with different parameters:

You can use Run Now with Different Parameters to re-run a job with different parameters or different values for existing parameters.

  1. Click the arrow next to Run Now and select Run Now with Different Parameters or, in the Active Runs table, click Run Now with Different Parameters. Enter the new parameters depending on the type of task:
    • Notebook: You can enter parameters as key-value pairs or a JSON object. You can use this dialog to set the values of widgets.
    • JAR and spark-submit: You can enter a list of parameters or a JSON document. The provided parameters are merged with the default parameters for the triggered run. If you delete keys, the default parameters are used. You can also add task parameter variables for the run.

  2. Click Run.
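The same override can be sent programmatically with the Jobs 2.1 `run-now` endpoint, which accepts `notebook_params` (a map of widget name to value) and `python_params` (a list of strings). A minimal sketch, assuming a placeholder host, token, and job ID:

```python
import json
import urllib.request


def build_run_now_payload(job_id, notebook_params=None, python_params=None):
    """Build the request body for POST /api/2.1/jobs/run-now.

    notebook_params: dict of widget name -> value (notebook tasks).
    python_params:   list of strings (Python / spark-submit tasks).
    Omitted fields fall back to the job's default parameters.
    """
    payload = {"job_id": job_id}
    if notebook_params:
        payload["notebook_params"] = notebook_params
    if python_params:
        payload["python_params"] = python_params
    return payload


def run_now(host, token, payload):
    # Hypothetical helper: POST the payload to the workspace's Jobs endpoint.
    req = urllib.request.Request(
        f"{host}/api/2.1/jobs/run-now",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # contains run_id on success


# Example: trigger job 123 with two notebook widgets overridden.
payload = build_run_now_payload(
    123, notebook_params={"env": "staging", "run_date": "2023-01-01"}
)
```

Note that this sets one parameter map for the whole run, which is exactly the limitation discussed further down this thread for multi-task jobs.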


New Contributor II

This is an issue for me too. I am using a multi-stage job calling the same notebook with different values for the parameters. This feels like a bug to me.

Hi @Alastair Reilly​ , I was checking to see if my suggestions helped you.

Or else, if you have found a solution, please share it with the community, as it can be helpful to others.

Also, please don't forget to click the "Select As Best" button whenever the information provided helps resolve your question.

New Contributor II


I am also facing the same issue. The problem is described below:

  1. I have a multi-task job consisting of multiple "spark_python_task" tasks that each execute a Python script on a Spark cluster. This pipeline is created within a CI/CD process, and all tasks have their default parameters.
  2. When I run this job via the Jobs UI by clicking the "Run now with different parameters" button, I can specify only one single parameter set for "spark_python_task". I would like to be able to pass arguments to tasks separately, but when I pass parameters, they are passed to all tasks.
  3. Even though it is stated that parameters are merged with the default parameters, I observe that my parameters are overridden. (reference)

Apart from the Jobs UI, I suffer from the same problem when I trigger the job using the API (reference).

In short, how can I pass different parameters to individual tasks, and have those parameters merged with the defaults instead of overridden, in a multi-task job?
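Since `run-now` applies one parameter set to every matching task, one client-side workaround is to rewrite each task's own parameters before triggering: fetch the job settings, merge per-task overrides into the task definitions, push them back with the Jobs `update` (or `reset`) endpoint, then run the job. A minimal sketch of the merge step; the task keys and file names are made up for illustration:

```python
import copy


def merge_task_overrides(tasks, overrides):
    """Return new task definitions with per-task parameter overrides applied.

    tasks:     list of task dicts as returned under "settings.tasks" by
               GET /api/2.1/jobs/get (each with a "task_key" and a
               "spark_python_task" block).
    overrides: dict mapping task_key -> replacement "parameters" list.
    Tasks without an override keep their default parameters; the input
    list is left untouched.
    """
    merged = []
    for task in tasks:
        task = copy.deepcopy(task)
        if task["task_key"] in overrides:
            task["spark_python_task"]["parameters"] = overrides[task["task_key"]]
        merged.append(task)
    return merged


# Example with two tasks, overriding only the first one.
tasks = [
    {"task_key": "ingest",
     "spark_python_task": {"python_file": "ingest.py",
                           "parameters": ["--env", "dev"]}},
    {"task_key": "report",
     "spark_python_task": {"python_file": "report.py",
                           "parameters": ["--fmt", "csv"]}},
]
new_tasks = merge_task_overrides(tasks, {"ingest": ["--env", "prod"]})
# The result can be sent as {"job_id": ..., "new_settings": {"tasks": new_tasks}}
# to POST /api/2.1/jobs/update, after which the job is triggered with run-now.
```

The obvious caveat is that this mutates the job definition itself, so a CI/CD pipeline that owns the job spec should restore or re-deploy the defaults afterwards.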
