Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Collecting Job Usage Metrics Without Unity Catalog

William_Scardua
Valued Contributor

hi,

I would like to request assistance on how to collect usage metrics and job execution data for my Databricks environment. We are currently not using Unity Catalog, but I would still like to monitor and analyze usage.

Could you please provide guidance or documentation on how to retrieve this information without relying on Unity Catalog?

Any recommendations on APIs, system tables, audit logs, or best practices would be greatly appreciated.

1 REPLY

LRALVA
Honored Contributor

hi @William_Scardua 

Here's a comprehensive overview of how to collect usage and job-execution metrics in Databricks without Unity Catalog,
using REST APIs, audit logs, system tables, and built-in monitoring features.
In summary, you can retrieve:
1. Job and query history via the Query History API and Jobs API.
2. Cluster activity and performance via the Clusters API (events) and the compute-metrics UI (or Ganglia charts via REST).
3. Workspace-level audit events via Premium-tier audit logs delivered to JSON or system tables.
4. Delta Live Tables pipeline metrics via the DLT event log.
Below are detailed options, with links to documentation and best practices.

## 1. REST APIs for Jobs, Queries, and Clusters
1.1 Query History API (SQL Endpoints)
Use the Query History API 2.0 to list all SQL queries, their run times, and statuses. This works even if you're not using Unity Catalog.
Endpoint: GET /api/2.0/sql/history/queries
https://learn.microsoft.com/en-us/answers/questions/1180376/azure-databricks-how-to-get-usage-statis...
Usage: Filter by user, time range, or warehouse to gather per-user or per-table query metrics.
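If it helps, here's a minimal sketch of paging through this endpoint with Python's requests library. It assumes a workspace URL and personal access token in environment variables (the names DATABRICKS_HOST and DATABRICKS_TOKEN are just my convention, not required by the API):

```python
import os
import requests

host = os.environ["DATABRICKS_HOST"]    # e.g. https://<workspace>.cloud.databricks.com
headers = {"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"}

# Page through recent SQL query history; `res` holds the queries,
# and has_next_page / next_page_token drive pagination.
params = {"max_results": 100}
while True:
    resp = requests.get(f"{host}/api/2.0/sql/history/queries",
                        headers=headers, params=params)
    resp.raise_for_status()
    body = resp.json()
    for q in body.get("res", []):
        print(q.get("user_name"), q.get("status"), q.get("duration"))  # duration in ms
    if not body.get("has_next_page"):
        break
    params = {"page_token": body["next_page_token"], "max_results": 100}
```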
1.2 Jobs API (Databricks Jobs)
Retrieve job execution details (start/end times, run duration, and task outcomes) via:
Endpoint: GET /api/2.1/jobs/runs/list
https://docs.databricks.com/api/workspace/clusters
Usage: Paginate through runs, then call jobs/runs/get for detailed metrics on each run.
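A minimal sketch of that pagination loop (same host/token setup as the Query History sketch above; the API caps the page size at 25):

```python
import os
import requests

host = os.environ["DATABRICKS_HOST"]
headers = {"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"}

params = {"limit": 25}
while True:
    resp = requests.get(f"{host}/api/2.1/jobs/runs/list",
                        headers=headers, params=params)
    resp.raise_for_status()
    body = resp.json()
    for run in body.get("runs", []):
        # start_time/end_time are epoch milliseconds; end_time is 0 while running
        duration_s = (run.get("end_time", 0) - run["start_time"]) / 1000
        print(run["run_id"], run.get("state", {}).get("result_state"), duration_s)
    if not body.get("has_more"):
        break
    params["page_token"] = body["next_page_token"]
```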
1.3 Clusters API (Events & Info)
Collect cluster lifecycle events (start, resize, terminate) and basic stats:
Events: POST /api/2.0/clusters/events (parameters go in the JSON request body)
https://docs.databricks.com/api/workspace/clusters/events
Cluster Info: GET /api/2.0/clusters/get?cluster_id=...
https://api-reference.cloud.databricks.com/workspace/clusters/get
Usage: Build dashboards of cluster uptime, autoscaling events, and node counts over time.
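A minimal sketch for the events endpoint; note its response includes a ready-made next_page request body, and <cluster-id> is a placeholder:

```python
import os
import requests

host = os.environ["DATABRICKS_HOST"]
headers = {"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"}

payload = {"cluster_id": "<cluster-id>", "limit": 50}  # placeholder cluster ID
while True:
    resp = requests.post(f"{host}/api/2.0/clusters/events",
                         headers=headers, json=payload)
    resp.raise_for_status()
    body = resp.json()
    for event in body.get("events", []):
        print(event["timestamp"], event["type"])  # e.g. RUNNING, RESIZING, TERMINATING
    if "next_page" not in body:
        break
    payload = body["next_page"]  # the API hands back the next request body directly
```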

## 2. Built-In Monitoring & Metrics
2.1 Compute Metrics UI
Databricks provides real-time hardware & Spark metrics (CPU, memory, tasks) in the Compute UI, even without Ganglia.
Docs: View compute metrics in Databricks UI
https://docs.databricks.com/aws/en/compute/cluster-metrics
Tip: Use these charts for ad hoc monitoring, or scrape via Selenium/REST for automation.
2.2 Ganglia Charts via REST
If you need historical Ganglia charts (pre-13.x runtimes), some users have scripted calls against the undocumented Ganglia API.
Community Example: "Get cluster metric (Ganglia charts)"
https://stackoverflow.com/questions/73505963/get-cluster-metric-ganglia-charts-of-all-clusters-via-r...
Caveat: Not officially supported; prefer the Compute Metrics UI or external exporters.

## 3. Audit Logs & System Tables
3.1 Workspace Audit Logs (Premium)
Enable workspace-level audit logs to capture user actions (table reads, notebook runs, cluster ops).
Reference: Audit log events list
https://docs.databricks.com/aws/en/admin/account-settings/audit-logs
Delivery:
System Table: Query system.access.audit directly (public preview; note that system tables themselves require a Unity Catalog-enabled workspace)
https://docs.databricks.com/aws/en/admin/system-tables/audit-logs
S3/Blob: Configure JSON log delivery (low latency) to storage
https://docs.databricks.com/aws/en/admin/account-settings/audit-log-delivery

Verbose Mode: Optionally turn on verbose audit logs to record every command/query text
https://docs.databricks.com/aws/en/admin/account-settings/verbose-logs
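Since you're not on Unity Catalog, the delivered-JSON path is probably the more relevant one. Here's a minimal sketch of querying delivered logs from a notebook (where spark is predefined); the storage path is a placeholder for your configured delivery location, and the camelCase field names follow the delivered audit-log schema:

```python
# Delivered logs land under <bucket>/<prefix>/workspaceId=<id>/date=<yyyy-mm-dd>/
audit = spark.read.json("s3://<audit-log-bucket>/<delivery-prefix>/workspaceId=*/date=*/")

job_events = (audit
              .where("serviceName = 'jobs'")
              .selectExpr("timestamp",
                          "userIdentity.email AS user",
                          "actionName"))

# e.g. counts of job-related actions per user
job_events.groupBy("user", "actionName").count().show(truncate=False)
```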

3.2 Delta Live Tables Event Log
For DLT pipelines, each pipeline writes an event log (as a Delta table) capturing pipeline progress, data quality checks,
and audit entries.
Docs: Monitor DLT pipelines with the event log
https://learn.microsoft.com/en-us/answers/questions/1180376/azure-databricks-how-to-get-usage-statis...
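A minimal sketch of reading the event log for a Hive-metastore (non-UC) pipeline, where the log lives as a Delta table under the pipeline's storage location (the path below is a placeholder):

```python
events = spark.read.format("delta").load("<pipeline-storage-location>/system/events")

# flow_progress events carry throughput and data-quality details
# in the `details` JSON column.
(events
 .where("event_type = 'flow_progress'")
 .select("timestamp", "message", "details")
 .orderBy("timestamp", ascending=False)
 .show(truncate=False))
```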

## 4. Best Practices & Integration
4.1 Centralize Logs & Metrics
Ingest REST API outputs (jobs, clusters, queries) into a dedicated Delta table or an external time-series database; a sketch follows below.
Archive audit logs in Parquet/Delta on S3/ADLS and query via Spark.
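For example, a minimal sketch of landing Jobs API output in a Delta table; fetch_runs is a hypothetical helper standing in for the pagination loop from section 1.2, and the table name is illustrative:

```python
import json

runs = fetch_runs()  # hypothetical: returns the list of run dicts from /api/2.1/jobs/runs/list
raw = spark.sparkContext.parallelize([json.dumps(r) for r in runs])
df = spark.read.json(raw)

(df.write
   .format("delta")
   .mode("append")
   .saveAsTable("ops_metrics.job_runs"))  # illustrative schema/table name
```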
4.2 Dashboard & Alerting
Use Databricks SQL or external BI tools (Tableau, Power BI) on your metrics tables.
For real-time alerts, stream critical events (e.g., job failures) into Slack/MS Teams via webhooks (see the sketch below).
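A minimal sketch using a Slack incoming webhook (the webhook URL is a placeholder, and fetch_runs is the hypothetical helper from section 4.1):

```python
import requests

WEBHOOK_URL = "https://hooks.slack.com/services/<your-webhook-path>"  # placeholder

for run in fetch_runs():  # hypothetical helper from section 4.1
    if run.get("state", {}).get("result_state") == "FAILED":
        requests.post(WEBHOOK_URL, json={
            "text": f"Job run {run['run_id']} failed: {run.get('run_page_url', '')}"
        })
```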
4.3 Security & Access
Limit REST API tokens to read-only scopes.
Ensure audit-log storage buckets use least-privilege access and encryption at rest.

LR
