Hi @david_btmpl
When you set up a Databricks workflow using for_each_task with a cluster pool (instance_pool_id), Databricks will, by default, reuse the same cluster for all concurrent tasks in that job. So even if you've set a higher concurrency (like M > 1), all those tasks will still run on a single shared cluster.
If your goal is to have M separate clusters running at the same time, you'll need to configure each task (or job) with its own new_cluster block, all pointing to the same instance pool. This approach gives you the cluster-level concurrency you're looking for, while still benefiting from the reduced startup time that pools provide. A rough sketch of what that looks like is below.
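For example, here is a minimal Jobs API 2.1-style JSON sketch of that idea (not a drop-in config): the pool ID, notebook path, Spark version, and input values are placeholders, and you should double-check the exact for_each_task schema against the Jobs API docs for your workspace.

```json
{
  "name": "foreach-with-per-iteration-clusters",
  "tasks": [
    {
      "task_key": "fan_out",
      "for_each_task": {
        "inputs": "[\"2024-01\", \"2024-02\", \"2024-03\"]",
        "concurrency": 3,
        "task": {
          "task_key": "fan_out_iteration",
          "notebook_task": {
            "notebook_path": "/Workspace/Shared/process_partition",
            "base_parameters": { "partition": "{{input}}" }
          },
          "new_cluster": {
            "spark_version": "15.4.x-scala2.12",
            "instance_pool_id": "<your-instance-pool-id>",
            "num_workers": 2
          }
        }
      }
    }
  ]
}
```

The key point is the new_cluster block inside the iterated task pointing at your pool: as I understand it, each concurrent iteration then requests its own job cluster from the pool instead of queuing on one shared cluster, while the pool keeps the per-cluster startup time low.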