Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.
Databricks Spark UI showing -1 Executors

Kirankumarbs
Contributor

Hi Community,

This might be a basic question, but I’m asking for educational purposes.

I noticed that in one of my jobs, the Spark UI shows -1 executors. Initially, I thought this might indicate that executors are idle, but that doesn’t seem to explain it fully. The job runs in a burst/triggered manner, so I would at least expect to see spikes in the Spark metrics view (e.g., Active Tasks). However, it shows a flat line at -1 for a long time, which doesn’t reflect the actual behavior.

[Attachments: Screenshot 2026-03-03 at 09.14.39.png, Screenshot 2026-03-03 at 09.14.00.png]

Is this a known issue or possibly a bug?

1 ACCEPTED SOLUTION


Louis_Frolio
Databricks Employee

Hey @Kirankumarbs , I did some quick research and found some helpful information. 

The -1 in the Executors tab is a Spark sentinel for "unknown / not available." The executor itself is active — that "Active(1)" label is accurate — but Spark doesn't have a current value for the active tasks metric, so it falls back to -1. The UI renders that directly, hence the flat line.

Nothing is wrong with the job. Check the Jobs and Stages tabs for real task activity.
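If you want to confirm this programmatically, Spark's standard monitoring REST API exposes per-executor metrics. Here is a minimal sketch that reads the executors endpoint and treats the -1 sentinel as "unknown" instead of a real count; the base URL and app id are placeholders you would fill in for your driver.

```python
# Sketch: reading executor metrics from Spark's monitoring REST API and
# mapping the -1 "not available" sentinel to None so it is never plotted
# as a real data point. Host/app-id values are placeholders.
import json
from urllib.request import urlopen

SENTINEL = -1  # Spark reports -1 when a metric value is unknown/not available


def normalize(value):
    """Map Spark's -1 sentinel to None; pass real values through unchanged."""
    return None if value == SENTINEL else value


def active_tasks_by_executor(base_url, app_id):
    # e.g. base_url = "http://<driver-host>:4040" on classic compute (placeholder)
    with urlopen(f"{base_url}/api/v1/applications/{app_id}/executors") as resp:
        executors = json.load(resp)
    return {e["id"]: normalize(e.get("activeTasks")) for e in executors}
```

With this, a dashboard or alerting script can distinguish "executor idle with 0 tasks" from "metric unavailable" rather than rendering the sentinel as a flat line at -1.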

 

Hope this helps, Louis.


2 REPLIES


SteveOstrowski
Databricks Employee

Hi @Kirankumarbs,

The -1 value you are seeing for executors in the Spark UI depends on which type of compute your job is running on, so let me cover both scenarios.

SERVERLESS COMPUTE

If your job is running on serverless compute, this is the expected explanation. The Spark UI is not fully supported on serverless compute. Per the documentation, the Spark UI and Spark logs are not available for serverless workloads. When the UI does partially render, metrics like executor count return -1 as a placeholder because the underlying executor model is abstracted away on serverless. Databricks manages the compute resources transparently, so traditional Spark executor metrics do not apply in the same way.

For serverless jobs, you should use Query Profile instead of the Spark UI to examine execution details. You can access Query Profile from the job run output page or from the SQL warehouse query history. It gives you stage-level and operator-level breakdowns that replace what you would normally look at in the Spark UI.

Docs reference: https://docs.databricks.com/en/compute/serverless.html (see the "Limitations" section)

CLASSIC COMPUTE WITH AUTOSCALING / DYNAMIC ALLOCATION

If your job is running on a classic (provisioned) cluster with autoscaling enabled, the -1 value can appear when dynamic allocation has released all executors during idle periods between bursts. With dynamic allocation, Spark removes executors that have been idle and adds them back when new tasks arrive. Between bursts of a triggered job, there may be no active executors, and the metrics endpoint can report -1 as a sentinel value indicating "no executor data available."
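For reference, these are the Spark properties that govern this behavior. The values below are illustrative, not recommendations; in Databricks you would typically set them in the cluster's Spark config rather than in code:

```python
# Sketch: dynamic-allocation settings that control when Spark releases idle
# executors between bursts. Values here are illustrative placeholders.
dynamic_allocation_conf = {
    "spark.dynamicAllocation.enabled": "true",
    # Keep at least one executor alive so executor metrics never go fully dark
    "spark.dynamicAllocation.minExecutors": "1",
    # How long an executor may sit idle before Spark removes it
    "spark.dynamicAllocation.executorIdleTimeout": "60s",
}
```

With minExecutors at 0 and a short idle timeout, a triggered job can spend most of its wall-clock time with no executors at all, which is exactly the window where the metrics view has nothing real to report.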

A few things to check in this case:

1. Look at the Executors tab in the Spark UI (not just the metrics graph). It will show the history of executors added and removed, which gives a clearer picture than the summary graph.

2. Check your cluster's autoscaling configuration. If your minimum workers is set to 0, the cluster can scale all the way down between bursts. Setting a minimum of at least 1 worker keeps at least one executor alive, which would prevent the -1 display.

3. Review the Ganglia or Compute Metrics tab for the cluster. These show hardware-level metrics (CPU, memory) over time and can help you confirm whether executors were actually running during your job's burst periods.

Docs reference: https://docs.databricks.com/en/compute/cluster-metrics.html
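On point 2 above, the relevant piece of the cluster spec is the autoscale block. A minimal sketch (runtime and node type are placeholders, not recommendations):

```python
# Sketch: the autoscale portion of a Databricks cluster spec. Setting
# min_workers >= 1 keeps one worker (and its executor) alive between
# triggered runs. spark_version and node_type_id are placeholders.
cluster_spec = {
    "spark_version": "15.4.x-scala2.12",  # placeholder runtime
    "node_type_id": "i3.xlarge",          # placeholder node type
    "autoscale": {
        "min_workers": 1,  # 0 would let the cluster scale fully down between bursts
        "max_workers": 8,
    },
}
```

The trade-off is cost: one warm worker is billed even while idle, so this is worth it mainly when burst latency or continuous metrics matter.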

WHAT TO DO NEXT

To narrow this down:

1. Confirm which compute type your job uses. Go to the job configuration and check whether "Serverless" is selected or whether it is using a job cluster or existing all-purpose cluster.

2. If serverless, switch to using Query Profile for monitoring. The -1 in the Spark UI is expected and not a bug in that context.

3. If classic compute, check the Executors tab and the cluster event log (Compute > your cluster > Event Log) to see executor add/remove events. This will confirm whether dynamic allocation is cycling executors between your triggered runs.

4. If you need consistent executor metrics on classic compute, you can disable dynamic allocation by setting spark.dynamicAllocation.enabled to false in the Spark config, though this means you will need to manually size your cluster.
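For step 3, the cluster event log is also available via the Databricks Clusters API (POST /api/2.0/clusters/events), so you can filter for resize activity in a script. A rough sketch; the event-type names below are illustrative examples rather than an exhaustive list, and the sample payload is fabricated to show the shape:

```python
# Sketch: filtering worker add/remove activity out of a Databricks cluster
# event log. The events list would come from POST /api/2.0/clusters/events;
# the type names here are illustrative examples, not an exhaustive list.
RESIZE_TYPES = {"RESIZING", "UPSIZE_COMPLETED", "NODES_LOST"}


def resize_events(events):
    """Keep only events that change the cluster's worker count."""
    return [e for e in events if e.get("type") in RESIZE_TYPES]


# Fabricated example of the shape of the API's "events" field:
sample = [
    {"type": "RESIZING", "timestamp": 1700000000000},
    {"type": "DRIVER_HEALTHY", "timestamp": 1700000001000},
]
```

Seeing repeated resize events bracketing each triggered run is a strong sign that dynamic allocation is cycling executors between bursts.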

Let us know which compute type you are using and we can help dig further.

* This reply was drafted with an agent system I built, which researches responses using the documentation I have available and previous memory. I personally review each draft for obvious issues, monitor the system's reliability, and update the reply if I detect drift, but there is still a small chance something is inaccurate, especially if you are experimenting with brand-new features.

If this answer resolves your question, could you mark it as "Accept as Solution"? That helps other users quickly find the correct fix.