Administration & Architecture
Explore discussions on Databricks administration, deployment strategies, and architectural best practices. Connect with administrators and architects to optimize your Databricks environment for performance, scalability, and security.

How to monitor serverless compute usage in real time

Carson
New Contributor II

Hello, 

I'm using Databricks Connect to connect a dash app to my Databricks account. My use case is similar to this example: https://github.com/databricks-demos/dbconnect-examples/tree/main/python/Plotly

I've been able to get everything configured and working properly with serverless compute. However, I would like to monitor how the dash app utilizes the serverless compute in real time. I'm aware of the ability to query the billing system table, but I would like more up-to-date information.

For example, with classic compute I can pull up the Compute tab in the Databricks UI and easily see what is running, current DBUs/hr, active memory, active cores, etc. Is there any way to see similar information for serverless compute? Thanks!

1 REPLY

mark_ott
Databricks Employee

There is currently no direct, real-time equivalent in the Databricks UI’s “Compute” tab for monitoring serverless (SQL serverless or Data Engineering serverless) compute usage in the same way as classic clusters, where you see live memory, DBU/hr, and active cores for each workload. The ability to monitor job or query workload details for serverless compute is more limited, but some alternatives exist.

Monitoring Serverless Compute in Databricks

  • Serverless SQL endpoints:
    The “SQL Warehouses” tab (formerly “Endpoints”) in the Databricks UI shows basic utilization metrics for serverless SQL warehouses, such as running queries, query history, and resource usage (including peak and active concurrency). Query History shows which users are connected, query durations, and per-query resource consumption, but not real-time memory or core metrics comparable to classic compute.

  • Dashboards and Alerts:
    Databricks automatically collects query and resource statistics in system tables (the system catalog, e.g. system.billing.usage), and you can build dashboards or alerts in the SQL Editor to visualize workload trends, but even these have a small lag (typically minutes).
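As an illustration, once the billing rows are lagged anyway, "near-real-time DBUs/hr" reduces to a small aggregation over the most recent window. The sketch below is hypothetical: it assumes you have already pulled recent rows (e.g. via Databricks Connect from system.billing.usage, whose exact schema you should verify in your workspace) into plain dicts with a numeric usage_quantity column.

```python
def approx_dbu_per_hour(rows, window_hours=1.0):
    """Approximate a DBUs/hr rate from billing-usage rows.

    `rows` is a list of dicts with a numeric 'usage_quantity' (DBUs),
    mirroring a column of system.billing.usage pulled via a query --
    the exact schema is an assumption, check your workspace.
    """
    total_dbus = sum(r["usage_quantity"] for r in rows)
    return total_dbus / window_hours

# Made-up rows standing in for the last hour of billing data:
sample = [
    {"usage_quantity": 0.8},
    {"usage_quantity": 1.2},
]
rate = approx_dbu_per_hour(sample, window_hours=1.0)
print(f"~{rate:.1f} DBUs/hr over the last hour")
```

The point is only that the aggregation is trivial once the rows are in hand; the freshness of the answer is bounded by the lag of the system table itself.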

  • REST API:
    The Databricks REST API provides endpoints for querying SQL warehouse status and recent queries, so with periodic polling, you can build a custom dashboard that approximates some level of near-real-time monitoring. However, live DBU/hr and resource breakdown per request are generally not exposed for serverless workloads.
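To sketch what such polling might look like: the snippet below calls the SQL Warehouses endpoint (GET /api/2.0/sql/warehouses/{id}) and reduces its JSON response to a dashboard-friendly summary. The field names (state, num_clusters, num_active_sessions) follow the public API reference but should be verified against your workspace; reading the host and token from environment variables is an assumption about your setup.

```python
import json
import os
import urllib.request

def summarize_warehouse(payload: dict) -> dict:
    """Reduce a GET /api/2.0/sql/warehouses/{id} response to the fields a
    live dashboard cares about. Field names follow the public API docs but
    should be verified against your workspace."""
    return {
        "state": payload.get("state"),
        "clusters": payload.get("num_clusters", 0),
        "active_sessions": payload.get("num_active_sessions", 0),
    }

def fetch_warehouse(warehouse_id: str) -> dict:
    """Poll one warehouse. Assumes DATABRICKS_HOST / DATABRICKS_TOKEN env vars."""
    url = f"{os.environ['DATABRICKS_HOST']}/api/2.0/sql/warehouses/{warehouse_id}"
    req = urllib.request.Request(
        url, headers={"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"}
    )
    with urllib.request.urlopen(req) as resp:
        return summarize_warehouse(json.load(resp))

# Parsing a canned response (no network or credentials needed):
canned = {"state": "RUNNING", "num_clusters": 1, "num_active_sessions": 3}
print(summarize_warehouse(canned))
```

Calling fetch_warehouse on a timer from the dash app gives a status view that refreshes every few seconds, which is about as close to "live" as serverless currently gets.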

Comparison Table

Feature                        | Classic Compute UI | Serverless Compute UI
-------------------------------|--------------------|--------------------------------
Real-time DBU/hr (live)        | Yes                | No
Active memory/cores            | Yes                | No
Active jobs/queries            | Yes                | Limited (SQL queries only)
Detailed user/session details  | Yes                | Partial (SQL warehouse only)
REST API support               | Yes                | Partial (status, query history)

Recommended Approach

  • Use the SQL Warehouses tab and Query History to monitor running queries for serverless SQL.

  • Pull recent resource usage from system tables and REST APIs; visualize it in a dashboard (e.g., via dash/Plotly).

  • Recognize that true “live cluster” metrics (core count, live memory, DBU/hr) are not natively available for serverless workloads; updates are near-real-time but not instant.
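The three points above can be combined into a simple background poller that a dash callback (e.g. driven by a dcc.Interval) reads from. The sketch below is generic and hypothetical: `fetch` stands in for whatever mix of system-table queries and REST calls you settle on, and the class only handles the polling cadence and thread safety.

```python
import threading
import time

class MetricsPoller:
    """Keep the latest near-real-time snapshot in memory for a dashboard.

    `fetch` is any zero-argument callable (system-table query, REST call,
    ...); this class does not know or care what it returns.
    """

    def __init__(self, fetch, interval_s: float = 30.0):
        self._fetch = fetch
        self._interval_s = interval_s
        self._lock = threading.Lock()
        self._latest = None
        self._stop = threading.Event()

    def _run(self):
        while not self._stop.is_set():
            snapshot = self._fetch()
            with self._lock:
                self._latest = snapshot
            self._stop.wait(self._interval_s)

    def start(self):
        threading.Thread(target=self._run, daemon=True).start()

    def stop(self):
        self._stop.set()

    def latest(self):
        with self._lock:
            return self._latest

# Example with a fake fetcher standing in for real API/system-table calls:
poller = MetricsPoller(lambda: {"active_sessions": 2}, interval_s=0.1)
poller.start()
time.sleep(0.3)
print(poller.latest())
poller.stop()
```

A dash layout would then render poller.latest() on each interval tick, accepting that the numbers are minutes (billing) to seconds (warehouse status) behind reality.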

Key Limitations

  • Serverless compute is managed by Databricks and abstracts away cluster details, so resource allocation, scaling, and billing are reported after tasks complete rather than continuously.

  • For production monitoring, periodic polling of system tables or API, combined with Query History, is the closest option available as of late 2025.

If more fine-grained metrics are essential, consider Databricks classic compute, or file a feature request with Databricks support for enhanced serverless workload visibility.