Hey @harshgrewal27
Before we start, there is a concept called the "container reuse window" that often explains why some jobs start instantly and others queue.
1. What controls queueing
Queueing is not controlled by the environment version (Env 3 / Env 4). Queueing in Databricks Workflows mainly happens due to:
- Job max concurrent runs
- Workspace concurrency limits
- Serverless compute capacity scheduling (largely out of your hands, since there is no custom capacity orchestration for production jobs)

If many runs start at once (10–20 or more), some runs may enter the scheduler queue until compute becomes available, sometimes for as long as ~30 minutes.
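To make the mechanism concrete, here is a toy sketch (not a Databricks API, all names are illustrative) of why runs queue once a concurrency limit is exhausted: runs are admitted up to the limit, and the rest wait in FIFO order for a slot.

```python
# Toy model of job-run admission: runs beyond the concurrency
# limit wait in a FIFO queue until a slot frees up.
from collections import deque

def schedule(run_ids, max_concurrent_runs):
    """Admit runs up to the limit; the rest wait in a FIFO queue."""
    active, queued = [], deque()
    for run_id in run_ids:
        if len(active) < max_concurrent_runs:
            active.append(run_id)   # starts immediately
        else:
            queued.append(run_id)   # waits in the scheduler queue
    return active, list(queued)

active, queued = schedule(list(range(1, 21)), max_concurrent_runs=5)
print(active)   # → [1, 2, 3, 4, 5]
print(queued)   # → [6, 7, ..., 20]
```

This is why 10–20 simultaneous triggers against a low concurrency limit look "stuck": they are simply waiting in line, not failing.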
2. Why Env 3 showed queueing but Env 4 didn't
Most likely behavior:
- Env 3 + Performance Optimized
- Jobs execute faster
- Each run requests more compute resources (up to 32 GB for high compute)
- The serverless pool may temporarily run out of slots (I have seen this often, so worth pointing out)
→ Some runs wait in the queue
3. What Databricks documentation says
Docs mention that job runs queue when concurrency or compute capacity limits are reached, not based on environment version.
Relevant areas in the documentation:
- Job concurrency settings
- Serverless compute capacity allocation
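For reference, queueing and concurrency are configured on the job itself. A minimal sketch of the relevant job-settings fields, written as a Python dict mirroring the Jobs API 2.1 JSON payload (the job name is hypothetical; verify field names against the Jobs API reference):

```python
# Hedged sketch of the job-settings fields that govern queueing.
# Mirrors the Jobs API 2.1 JSON payload; verify names in the docs.
job_settings = {
    "name": "nightly-etl",        # hypothetical job name
    "max_concurrent_runs": 1,     # runs beyond this limit queue (or are skipped)
    "queue": {"enabled": True},   # opt runs into the queue when limits are hit
}
print(job_settings["max_concurrent_runs"])  # → 1
```

Note that neither field references the serverless environment version, which is the point: queueing is a job/scheduler property, not an environment-version property.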
Link: https://docs.databricks.com/aws/en/release-notes/serverless/environment-version?
Important point: that page covers library/runtime differences, not concurrent run limits or scheduling behavior. There is no explicit note that Env 3 queues more conservatively than Env 4, or vice versa.
Databricks Solution Architect