Have you ever noticed (and wondered why) the wonderful Spark Job UI is no longer available in a Databricks notebook when the cell is executed on a 'serverless' cluster?
Traditionally, whenever we run Spark code (an action command), we see the job runs (Job 1, Job 2, etc.) that link to the famous Spark UI with the DAG and all the run details, as in the snippet below. That view is not available if your notebook is attached to an interactive serverless cluster.
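Here is a minimal, hypothetical example of the kind of action cell I mean. The data and column names are made up for illustration; on a classic cluster this cell shows the Job links, on serverless it simply runs without them.

```python
# A typical "action" cell: on a classic cluster this renders
# "Job 1", "Job 2", ... links that open the Spark UI with the DAG view.
# On serverless, the cell still runs, but those links are not shown.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # already defined as `spark` in a Databricks notebook

df = spark.range(10_000_000).selectExpr("id % 100 AS k")
result = df.groupBy("k").count().collect()  # collect() is the action that triggers the Spark job
print(len(result))  # 100 groups
```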
Why:
Serverless abstraction: In serverless interactive clusters (and SQL Warehouses), Databricks fully manages the Spark driver and executors behind the scenes. You don’t get direct visibility into the JVM processes or Spark UI because those resources are ephemeral and multi‑tenant.
No dedicated driver node: The Spark driver that normally hosts the Spark UI (with DAG, stages, tasks) isn’t exposed in serverless mode. Databricks hides it to enforce isolation and simplify operations.
Security & multi‑tenancy: The Spark UI can reveal low‑level details about cluster internals. In shared/serverless environments, exposing that could leak information across tenants, so Databricks disables it.
Monitoring alternatives
Instead of the Spark UI, Databricks provides query history, execution plans (EXPLAIN), and job run details in the Databricks workspace. These give you visibility into performance without exposing the raw DAG view.
Query History (SQL Warehouses): Shows query execution times, resource usage, and status.
Job Run Details (Jobs UI): For scheduled pipelines, you can see task durations, logs, and outcomes.
df.explain(True): Prints the logical and physical plans, which is the closest you get to DAG inspection in serverless mode (see the sketch after this list).
Metrics in Databricks UI: Cluster metrics, query profiles, and Delta Live Tables monitoring dashboards.
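As a quick illustration of the explain-based option, here is a minimal sketch. The DataFrame and column names are hypothetical; the point is that explain(True) and the "formatted" mode are available even when the Spark UI is not.

```python
# Minimal sketch: inspecting the query plan in a serverless notebook.
# In a Databricks notebook `spark` already exists; the builder line is only
# needed when running this outside Databricks.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical aggregation over an in-memory DataFrame.
df = (
    spark.range(1_000_000)
    .withColumn("bucket", F.col("id") % 10)
    .groupBy("bucket")
    .agg(F.count("*").alias("rows"), F.avg("id").alias("avg_id"))
)

# extended=True prints the parsed, analyzed, and optimized logical plans
# plus the physical plan -- the closest substitute for the DAG view.
df.explain(True)

# "formatted" mode (Spark 3.0+) gives a more readable physical plan summary.
df.explain("formatted")
```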
Happy serverless!!! 🙂
RG #Driving Business Outcomes with Data Intelligence