Hi @AJ270990, I did some digging and have a few pointers that should help.
This is one of those areas where the answer is "it depends," based on which flavor of serverless compute you're using. Let's break it down so it's easy to reason about.
First, serverless SQL warehouses.
Here, "idle timeout" is essentially governed by the Auto Stop setting on the warehouse. Think of Auto Stop as the mechanism that decides when to shut things down after activity stops.
What it means in practice: once the warehouse has been idle for the configured number of minutes, it will automatically stop. Until that happens, even if nothing is running, it can still accrue DBU and cloud costs, so this setting matters more than people initially expect.
A couple of guardrails to keep in mind:
- The default idle timeout is 10 minutes.
- The minimum you can set in the UI is 5 minutes.
- If you're creating warehouses via the API, you can push that as low as 1 minute.
If you want to dig deeper, the "Create a SQL warehouse" documentation walks through the configuration options in detail (same structure across AWS, Azure, and GCP).
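To make the API path concrete, here is a minimal sketch of the payload for the Create SQL warehouse endpoint (`POST /api/2.0/sql/warehouses`) with `auto_stop_mins` set below the UI minimum. The warehouse name and placeholder host/token are assumptions for illustration, not values from the thread:

```python
import json

# Sketch of a Create SQL warehouse request body.
# "my-serverless-wh" is a hypothetical name; host/token are placeholders.
payload = {
    "name": "my-serverless-wh",
    "warehouse_type": "PRO",
    "enable_serverless_compute": True,
    "cluster_size": "Small",
    "auto_stop_mins": 1,  # API allows as low as 1; the UI minimum is 5
}

# To actually create the warehouse, you would POST this payload, e.g.:
# requests.post(f"{host}/api/2.0/sql/warehouses",
#               headers={"Authorization": f"Bearer {token}"},
#               json=payload)
print(json.dumps(payload, indent=2))
```

The POST call is left commented out so the snippet runs anywhere; in a real workspace you'd supply your host URL and a PAT or OAuth token.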
Now, serverless compute for notebooks and workflows is a slightly different story.
In this case, Databricks abstracts away the cluster lifecycle, so you don't get a direct "idle timeout" knob for the REPL or session. In other words, you're not explicitly telling it when to shut down due to inactivity.
What you do have, though, is an execution timeout, sometimes referred to as overspend protection.
By default, serverless compute enforces an execution timeout on long-running commands. If needed, you can tune this at the notebook level using the Spark config:

spark.databricks.execution.timeout
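As a quick sketch of how you'd apply that config in a notebook cell: in Databricks the `spark` session is provided for you, so the actual call is one line. The `3600` below is an example value in seconds, not a documented default; the key/value pair is prepared separately so the snippet also runs outside Databricks:

```python
# Sketch: tuning the execution timeout at the notebook level.
# 3600 (seconds) is an illustrative value, not the default.
key, value = "spark.databricks.execution.timeout", "3600"

# Inside a Databricks notebook you would apply it with:
# spark.conf.set(key, value)
print(f"{key} = {value}s")
```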
So the mental model is:
- SQL warehouses: you control idle shutdown via Auto Stop.
- Notebooks/workflows: Databricks manages idling, and you control max execution time instead.
Net takeaway: if your goal is cost control, Auto Stop is your lever for SQL warehouses, while execution timeout is your main safeguard on the notebook side.
Hope this helps, Louis.