09-30-2025 02:25 AM
10-01-2025 06:19 AM
After upgrading to Databricks 16.4, there is a notable change in SQL timeout behavior. The default timeout for SQL statements and objects like materialized views and streaming tables is now set to two days (172,800 seconds). This system-wide default can cause previously long-running queries to fail once they exceed the limit, which would explain the SQL timeout errors you're encountering.
The default STATEMENT_TIMEOUT parameter for Databricks SQL statements is set to two days (172,800 seconds).
Materialized views and streaming tables created after August 2025 automatically inherit the warehouse's timeout, also defaulting to two days.
For jobs and queries running on serverless compute, the default execution timeout is 9,000 seconds (2.5 hours) unless overridden by the configuration property spark.databricks.execution.timeout. No timeout is applied to jobs on other compute types unless explicitly set.
Notebooks attached to SQL warehouses have an idle execution context timeout of 8 hours, which is unchanged from prior releases.
Increase the timeout if your SQL jobs regularly exceed these new limits. You can do this at the session, workspace, or job level using the STATEMENT_TIMEOUT property.
For jobs on serverless compute, adjust spark.databricks.execution.timeout if needed.
Make sure existing materialized views and streaming tables are refreshed to synchronize with new timeout settings.
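As a sketch (the three-part object names below are placeholders for your own catalog, schema, and objects), a manual refresh can be issued from a SQL warehouse:
REFRESH MATERIALIZED VIEW main.reporting.daily_revenue_mv;
REFRESH STREAMING TABLE main.reporting.orders_st;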
For SQL statements: SET STATEMENT_TIMEOUT = <number of seconds>;
Example: SET STATEMENT_TIMEOUT = 86400; for 24 hours.
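If I recall the behavior correctly, issuing SET with the parameter name alone returns the value currently in effect, which is handy for confirming the session actually picked up the change:
SET STATEMENT_TIMEOUT;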
For workspace-wide changes: Go to workspace admin settings > Compute > Manage SQL warehouses, and update the SQL Configuration Parameters.
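If memory serves, the SQL Configuration Parameters box takes one parameter per line as a space-separated key and value, so a 24-hour warehouse-wide timeout would look something like:
STATEMENT_TIMEOUT 86400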
For Spark Connect: Use the Spark config property: spark.databricks.execution.timeout = <number of seconds>.
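Keeping with the SQL style of the other examples, the property can also be set for the current session with a SET statement (86400 seconds, i.e. 24 hours, is just an illustrative value; in a Python Spark Connect session the equivalent would be spark.conf.set with the same key):
SET spark.databricks.execution.timeout = 86400;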
10-02-2025 02:29 AM
Thanks for your detailed reply!
Turns out the timeouts were reached because of networking issues.