Warehousing & Analytics

SQL Warehouse - increasing concurrent queries limit

as5
New Contributor

Hello everyone,

I would like to ask about the possibility of increasing the default limit of concurrent queries per cluster on a SQL warehouse, which is set to 10.

While researching this topic, I noticed that there is no official documentation available regarding the configuration of the concurrent queries limit for SQL Warehouse clusters.

However, I did come across mentions of the spark.databricks.maxConcurrentRuns and spark.databricks.maxConcurrentQueries parameters on the forums.

Unfortunately, there are no specific instructions on how to configure these parameters or where to set them.

I was wondering if any of you have experience with increasing the concurrent queries limit for SQL warehouse clusters in Databricks. If so, could you please provide guidance on how to accomplish this, especially where exactly such a setting should be configured?

Any insights, tips, or documentation references would be greatly appreciated.

1 REPLY

Kaniz
Community Manager

Hi @as5! Let’s delve into the details of increasing the concurrent queries limit for SQL warehouse clusters in Databricks.

  1. Default Concurrent Queries Limit: By default, Databricks limits the number of concurrent queries per cluster assigned to a SQL warehouse based on the cost to compute their results. Specifically, each cluster handles approximately 10 concurrent queries; queries beyond that are queued.
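As a toy illustration of that queueing behavior (this is not Databricks internals; the query names and counts are made up):

```python
from collections import deque

def admit(queries, limit=10):
    """Toy model of admission control: with a per-cluster cap,
    the first `limit` queries run immediately and the rest queue
    until a running slot frees up."""
    running = queries[:limit]
    queued = deque(queries[limit:])
    return running, queued

# 14 queries arrive at once against the default cap of 10.
running, queued = admit([f"q{i}" for i in range(14)], limit=10)
print(len(running), len(queued))  # 10 run, 4 wait in the queue
```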

  2. Adjusting the Limit: To increase this limit, you can adjust the spark.databricks.maxConcurrentRuns configuration parameter.

  3. Here’s how you can do it:

    • Per-Cluster Basis: Modify the cluster configuration directly. When creating or modifying a cluster, look for the option to set this parameter. You can specify a higher value to allow more concurrent queries for that specific cluster.

    • Global Level (Cluster Policy): Alternatively, you can set this parameter globally by modifying the cluster policy. This change will affect all clusters associated with that policy.
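As a sketch of the policy route, assuming the standard Databricks cluster-policy JSON syntax (note that the parameter name itself is the unofficial one discussed above, and the value 20 is just an example):

```json
{
  "spark_conf.spark.databricks.maxConcurrentRuns": {
    "type": "fixed",
    "value": "20"
  }
}
```

A "fixed" policy attribute pins the value for every cluster created under that policy, so users cannot override it per cluster.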

  4. Where to Set the Parameter: When creating or modifying a cluster, you’ll find an option to configure advanced settings. Look for the field where you can input custom Spark configurations. In that field, add the following line:

    spark.databricks.maxConcurrentRuns <desired_value>
    

    Replace <desired_value> with the maximum number of concurrent queries you want to allow. For example, if you want to allow 20 concurrent queries, set it as follows:

    spark.databricks.maxConcurrentRuns 20
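If you create clusters through the Databricks Clusters REST API rather than the UI, the same setting goes in the spark_conf field of the request body. This is only a sketch: the cluster name, node type, Spark version, worker count, and the value 20 are illustrative assumptions, not recommendations.

```python
# Sketch of a JSON body for the Databricks Clusters API
# (POST /api/2.0/clusters/create). Custom Spark configurations
# live under "spark_conf", keyed by the full parameter name.
cluster_spec = {
    "cluster_name": "high-concurrency-sql",      # illustrative name
    "spark_version": "13.3.x-scala2.12",          # illustrative runtime
    "node_type_id": "i3.xlarge",                  # illustrative node type
    "num_workers": 2,
    "spark_conf": {
        "spark.databricks.maxConcurrentRuns": "20",
    },
}

print(cluster_spec["spark_conf"]["spark.databricks.maxConcurrentRuns"])
```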
    
  5. Monitoring and Optimization:

    • Use the Databricks monitoring page to track peak query counts.
    • Check query history and profiles to identify areas for optimization.
    • Consider the trade-off between query latency, cost, and cluster size.
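As a small sketch of the monitoring point: once you have query start and end times (for example, exported from the Databricks query history page), peak concurrency can be computed with a simple event sweep. The function below is illustrative only and not part of any Databricks API; it helps you judge whether the default limit of 10 is actually being hit.

```python
def peak_concurrency(intervals):
    """Given (start, end) times of queries, return the peak number
    running at once."""
    events = []
    for start, end in intervals:
        events.append((start, 1))   # query starts: concurrency +1
        events.append((end, -1))    # query ends: concurrency -1
    # Sort end events before start events at ties, so a query ending
    # exactly when another starts is not double counted.
    events.sort(key=lambda e: (e[0], e[1]))
    peak = current = 0
    for _, delta in events:
        current += delta
        peak = max(peak, current)
    return peak

# Four overlapping queries; at most three run at the same time.
print(peak_concurrency([(0, 5), (1, 6), (2, 3), (4, 9)]))  # 3
```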

Remember that adjusting these parameters should be done thoughtfully, considering both performance and cost implications. Happy querying! 🚀🔍

 