Administration & Architecture
Explore discussions on Databricks administration, deployment strategies, and architectural best practices. Connect with administrators and architects to optimize your Databricks environment for performance, scalability, and security.

DBSQL MCP Server - how to specify compute cluster?

rdruska
New Contributor II

Hi,

The DBSQL MCP Server is really cool; however, I am not sure how to connect it to a specific cluster, and I could not find any information about this in the documentation. My MCP settings look like this:

"databricks-sql-mcp": {
  "type": "streamable-http",
  "url": "https://{DATABRICKS_URL}.azuredatabricks.net/api/2.0/mcp/sql",
  "headers": {
    "Authorization": "Bearer {TOKEN}"
  },
  "note": "Databricks SQL MCP"
}

I tried providing a warehouse_id query parameter and even prompting the model to use a specific compute cluster, but it did not help: the commands always run on one particular cluster, and I am not sure why that cluster was selected. Did anyone have the same problem and manage to solve it?

3 REPLIES 3

Ashwin_DSA
Databricks Employee

Hi @rdruska,

You are right, the behaviour is subtle and not well documented yet. Having checked internally, here is what I found: as of today, the DBSQL MCP server will, by default, pick a random running SQL warehouse from the set of warehouses your token has access to.

However, there is a workaround: pass the warehouse via the MCP request's _meta field, not via URL parameters or prompt text as you are currently doing.

If you're calling the endpoint directly, you can pin the warehouse like this in the JSON-RPC request body. The server will then use exactly the warehouse whose warehouse_id you provide:

{
  "jsonrpc": "2.0",
  "id": "1",
  "method": "tools/call",
  "params": {
    "name": "execute_sql",
    "arguments": {
      "query": "SELECT 1"
    },
    "_meta": {
      "warehouse_id": "<your-sql-warehouse-id>"
    }
  }
}
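To make that concrete, here is a short Python sketch that builds this JSON-RPC body and POSTs it to the endpoint with a personal access token. The workspace URL, token, and warehouse ID are placeholders, and I'm assuming here that the server accepts a plain JSON POST for a single request:

```python
import json
import urllib.request

WORKSPACE_URL = "https://<your-workspace>.azuredatabricks.net"  # placeholder
TOKEN = "<your-pat>"                                            # placeholder token
WAREHOUSE_ID = "<your-sql-warehouse-id>"                        # placeholder


def build_execute_sql_request(query, warehouse_id, request_id="1"):
    """Build the JSON-RPC tools/call body, pinning the warehouse via _meta."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "execute_sql",
            "arguments": {"query": query},
            "_meta": {"warehouse_id": warehouse_id},
        },
    }


def call_dbsql_mcp(query):
    """POST the request to the DBSQL MCP endpoint and return the parsed response."""
    body = json.dumps(build_execute_sql_request(query, WAREHOUSE_ID)).encode()
    req = urllib.request.Request(
        f"{WORKSPACE_URL}/api/2.0/mcp/sql",
        data=body,
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```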

 Does this help?

Just so you are aware, adding ?warehouse_id=... to the URL is not supported; the server ignores it. Also, many MCP clients (e.g. some IDE integrations) don't yet expose a way to set _meta, so from those tools you currently can't override the warehouse and will see the default random-running-warehouse behaviour. I appreciate this is not ideal, but it is how it works as of today.

If you share which MCP client you're using (Cursor, Claude Code, VS Code extension, etc.), we can look at whether there's a practical workaround (for example, calling MCP directly via HTTP or using a small proxy that injects _meta.warehouse_id for you).

If this answer resolves your question, could you mark it as "Accept as Solution"? That helps other users quickly find the correct fix.

Regards,
Ashwin | Delivery Solution Architect @ Databricks
Helping you build and scale the Data Intelligence Platform.
***Opinions are my own***

rdruska
New Contributor II

Hi, thanks a lot for your reply! I use Cursor; is there a workaround for implementing the `_meta` argument there?

Ashwin_DSA
Databricks Employee

Hi @rdruska,

Thanks for clarifying. Unfortunately, Cursor doesn't expose a way to set the MCP _meta field, so there isn't a clean, built-in way to pass warehouse_id from mcp.json. The MCP spec supports _meta, and the DBSQL MCP server honors _meta.warehouse_id, but Cursor's current UI/config doesn't let you inject it per request, so the server falls back to "pick a random running warehouse you have access to."

If you only have one warehouse running (and the others stopped) when you use Cursor, the DBSQL MCP server's "random running warehouse" behaviour effectively becomes deterministic. This is the simplest option, though obviously not ideal if you normally keep multiple warehouses up.

Alternatively, if you are interested in exploring advanced options, consider adding a small proxy MCP server that injects _meta. This means implementing a tiny MCP server (for example, as a Databricks App or a small service) that accepts MCP requests from Cursor and forwards them to https://<workspace-host>/api/2.0/mcp/sql.

When doing so, you can inject the _meta argument into the JSON-RPC body. If you then point Cursor at this proxy instead of the DBSQL MCP URL directly, you should be able to achieve what you are after. This approach may sound engineering-heavy, but it gives you full control over which warehouse is used.
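To sketch the proxy idea in a bit more detail, here is a minimal stand-alone example in Python (standard library only). Treat everything here as an illustration: the upstream URL and warehouse ID are placeholders, and a production proxy would also need to handle streaming/SSE responses and MCP session headers, which this sketch deliberately ignores.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

UPSTREAM = "https://<workspace-host>/api/2.0/mcp/sql"  # placeholder workspace URL
WAREHOUSE_ID = "<your-sql-warehouse-id>"               # placeholder warehouse id


def inject_warehouse(raw_body, warehouse_id):
    """Add _meta.warehouse_id to tools/call bodies; pass other methods through."""
    body = json.loads(raw_body)
    if body.get("method") == "tools/call":
        body.setdefault("params", {}).setdefault("_meta", {})["warehouse_id"] = warehouse_id
    return json.dumps(body).encode()


class ProxyHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the incoming MCP request from the client (e.g. Cursor).
        length = int(self.headers.get("Content-Length", 0))
        body = inject_warehouse(self.rfile.read(length), WAREHOUSE_ID)
        upstream = Request(
            UPSTREAM,
            data=body,
            headers={
                # Forward the caller's token so auth still happens upstream.
                "Authorization": self.headers.get("Authorization", ""),
                "Content-Type": "application/json",
            },
        )
        # Relay the upstream response back to the client.
        with urlopen(upstream) as resp:
            payload = resp.read()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)


# To run locally:
# HTTPServer(("127.0.0.1", 8080), ProxyHandler).serve_forever()
```

You would then point the url in Cursor's mcp.json at http://127.0.0.1:8080 (or wherever the proxy runs) instead of the workspace endpoint.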

Because this limitation is on the client side, the long-term fix would need Cursor to support _meta in their MCP configuration. If this is affecting your org, I'd recommend opening a feature request with Cursor.

Hope this helps.

If this answer resolves your question, could you mark it as "Accept as Solution"? That helps other users quickly find the correct fix.

Regards,
Ashwin | Delivery Solution Architect @ Databricks
Helping you build and scale the Data Intelligence Platform.
***Opinions are my own***