Hello @yit337 !
I don't think there is a job-level max_concurrent_task_runs setting for normal DAG tasks. But there are two related concepts:
1. You can limit concurrent runs of the same job:

```yaml
resources:
  jobs:
    my_job:
      name: my_job
      max_concurrent_runs: 1
```

This limits how many runs of the same job can overlap. It does not limit how many tasks run in parallel inside one job run. https://docs.databricks.com/aws/en/jobs/configure-job
2. You can limit parallel iterations of a repeated task: use a For each task and set concurrency:

```yaml
tasks:
  - task_key: process_items
    for_each_task:
      inputs: '["A", "B", "C", "D"]'
      concurrency: 2
      task:
        task_key: process_one_item
        notebook_task:
          notebook_path: ../src/process_one_item.py
```

https://docs.databricks.com/aws/en/dev-tools/bundles/job-task-types
For normal separate tasks in the same job, concurrency is controlled by the DAG: tasks without dependencies can run in parallel, and tasks with depends_on run only after their dependencies complete. So to limit parallelism, you can group tasks into waves using dependencies. https://docs.databricks.com/aws/en/jobs/control-flow
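For example, here is a sketch of that "waves" pattern (task keys and notebook paths are hypothetical): wave1_a and wave1_b run in parallel, and the second wave only starts once both have finished, so at most two tasks run at a time:

```yaml
tasks:
  - task_key: wave1_a
    notebook_task:
      notebook_path: ../src/task_a.py
  - task_key: wave1_b
    notebook_task:
      notebook_path: ../src/task_b.py
  # second wave: gated on both first-wave tasks
  - task_key: wave2_c
    depends_on:
      - task_key: wave1_a
      - task_key: wave1_b
    notebook_task:
      notebook_path: ../src/task_c.py
  - task_key: wave2_d
    depends_on:
      - task_key: wave1_a
      - task_key: wave1_b
    notebook_task:
      notebook_path: ../src/task_d.py
```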
If this answer resolves your question, could you please mark it as "Accept as Solution"? It will help other users quickly find the correct fix.
Senior BI/Data Engineer | Microsoft MVP Data Platform | Microsoft MVP Power BI | Power BI Super User | C# Corner MVP