Hi @AJ270990, I did some digging and have some tips to guide you.
This is one of those areas where the answer is “it depends,” based on which flavor of serverless compute you’re using. Let’s break it down so it’s easy to reason about.
First, serverless SQL warehouses.
Here, “idle timeout” is essentially governed by the Auto Stop setting on the warehouse. Think of Auto Stop as the mechanism that decides when to shut things down after activity stops.
What it means in practice: once the warehouse has been idle for the configured number of minutes, it will automatically stop. Until that happens, even if nothing is running, it can still accrue DBU and cloud costs — so this setting matters more than people initially expect.
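To make that concrete, here is a hypothetical back-of-the-envelope estimate of what idle time costs before Auto Stop fires. The DBU rate and $/DBU figures below are made-up placeholders, not real Databricks pricing; plug in your warehouse size and contract rate.

```python
def idle_cost(idle_minutes: float, dbu_per_hour: float, usd_per_dbu: float) -> float:
    """Cost accrued while the warehouse sits idle before Auto Stop kicks in."""
    return (idle_minutes / 60.0) * dbu_per_hour * usd_per_dbu

# Hypothetical numbers: a warehouse rated at 12 DBU/hour, $0.70 per DBU,
# comparing a 10-minute idle window to a 1-minute one.
long_idle = idle_cost(idle_minutes=10, dbu_per_hour=12, usd_per_dbu=0.70)
short_idle = idle_cost(idle_minutes=1, dbu_per_hour=12, usd_per_dbu=0.70)
print(f"10-min idle: ${long_idle:.2f}, 1-min idle: ${short_idle:.2f}")
```

Multiply that per-stop cost by how many times a day the warehouse goes quiet and the Auto Stop setting starts to matter.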
A couple of guardrails to keep in mind:
- The default idle timeout is 10 minutes.
- The minimum you can set in the UI is 5 minutes.
- If you're creating warehouses via the API, you can push that as low as 1 minute.
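As a sketch of the API route, the payload below creates a serverless warehouse with a 1-minute Auto Stop. The endpoint and field names follow the SQL Warehouses REST API (`POST /api/2.0/sql/warehouses`, `auto_stop_mins`) as I understand it; verify them against the docs for your cloud, and treat the host/token as placeholders.

```python
import json

# Warehouse definition with Auto Stop pushed below the 5-minute UI floor.
payload = {
    "name": "cost-conscious-wh",          # hypothetical name
    "warehouse_type": "PRO",
    "enable_serverless_compute": True,
    "auto_stop_mins": 1,                  # UI minimum is 5; the API allows 1
    "cluster_size": "Small",
}

print(json.dumps(payload, indent=2))

# To actually send it (host and token are placeholders):
# import requests
# requests.post(
#     "https://<workspace-host>/api/2.0/sql/warehouses",
#     headers={"Authorization": "Bearer <token>"},
#     json=payload,
# )
```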
If you want to dig deeper, the “Create a SQL warehouse” documentation walks through the configuration options in detail (same structure across AWS, Azure, and GCP).
Now, serverless compute for notebooks and workflows — slightly different story.
In this case, Databricks abstracts away the cluster lifecycle, so you don’t get a direct “idle timeout” knob for the REPL or session. In other words, you’re not explicitly telling it when to shut down due to inactivity.
What you do have, though, is an execution timeout — sometimes referred to as overspend protection.
An execution timeout applies by default. If needed, you can tune it at the notebook level using the Spark config `spark.databricks.execution.timeout`.
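In a Databricks notebook, `spark` is the live SparkSession, so the call is simply `spark.conf.set("spark.databricks.execution.timeout", "600")` (the value is in seconds, per my reading of the docs). The minimal stand-in below only demonstrates the call shape so the snippet runs outside Databricks.

```python
class _FakeConf:
    """Stand-in for spark.conf so this snippet runs anywhere."""
    def __init__(self):
        self._settings = {}

    def set(self, key: str, value: str) -> None:
        self._settings[key] = value

    def get(self, key: str) -> str:
        return self._settings[key]

conf = _FakeConf()
# Cap any single execution at 600 seconds; the hypothetical value here
# stands in for whatever ceiling fits your workload.
conf.set("spark.databricks.execution.timeout", "600")
print(conf.get("spark.databricks.execution.timeout"))
```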
So the mental model is:
- SQL warehouses → you control idle shutdown via Auto Stop.
- Notebooks/workflows → Databricks manages idling, and you control max execution time instead.
Net takeaway: if your goal is cost control, Auto Stop is your lever for SQL warehouses, while execution timeout is your main safeguard on the notebook side.
Hope this helps, Louis.