Hi @arielmoraes, it's difficult to say definitively whether there's a bug in the queueing mechanism.
However, there are a few things you could check:
1. **Cluster resources**: Ensure that your cluster has enough resources to run the jobs concurrently. If resources are insufficient, jobs may be queued instead of running concurrently as expected.
2. **Job configuration**: Check your job configuration and make sure "Maximum concurrent runs" is set to the desired number. For Structured Streaming jobs, the general recommendation is to set "Maximum concurrent runs" to 1, but in your case you may need a higher value (such as 10, as you mentioned).
3. **Job retries**: If a job fails, it might be retried based on your job configuration, which could affect the number of concurrently running jobs.
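For reference, points 2 and 3 correspond to fields in the Jobs API 2.1 job definition. Here's a minimal sketch of the relevant settings; the job name, task key, and notebook path are placeholders:

```json
{
  "name": "example-streaming-job",
  "max_concurrent_runs": 10,
  "tasks": [
    {
      "task_key": "ingest",
      "max_retries": 0,
      "retry_on_timeout": false,
      "notebook_task": { "notebook_path": "/Jobs/ingest" }
    }
  ]
}
```

Setting `max_retries` to 0 (temporarily) is one way to rule out retries as the source of unexpected queued runs, since `max_concurrent_runs` caps how many runs execute at once and anything beyond that is skipped or queued.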
If you've checked these aspects and the problem persists, it might be worth reaching out to Databricks support by filing a support ticket for further assistance.