SQLSTATE HY000 after upgrading from Databricks 15.4 to 16.4

ekmazars
New Contributor
After upgrading from Databricks 15.4 to 16.4, without changing our Python code, we suddenly get SQL timeouts (see the error below).
Is there a new timeout default in this version that we don't know about and need to increase?
 
After a quick search I did not find anything about it.
 
Thanks!
 
[INVALID_HANDLE.OPERATION_ABANDONED] The handle 4ad5e1ac-2419-4737-abd0-0a03376eed04 is invalid. Operation was considered abandoned because of inactivity and removed. SQLSTATE: HY000
 

2 REPLIES

mark_ott
Databricks Employee
Accepted Solution

Databricks 16.4 introduces a notable change in SQL timeout behavior: the default timeout for SQL statements, and for objects such as materialized views and streaming tables, is now two days (172,800 seconds). Long-running queries that previously completed may now fail once they exceed this system-wide default, producing timeout errors like the one you're seeing. The key changes are listed below, followed by a quick way to check your current settings.

Key Timeout Changes in Databricks 16.4

  • The default STATEMENT_TIMEOUT parameter for Databricks SQL statements is set to two days (172,800 seconds).

  • Materialized views and streaming tables created after August 2025 automatically inherit the warehouse’s timeout, also defaulting to two days.

  • For jobs and queries running on serverless compute, the default execution timeout is 9,000 seconds (2.5 hours) unless overridden by the configuration property spark.databricks.execution.timeout. No timeout is applied to jobs on other compute types unless explicitly set.

  • Notebooks attached to SQL warehouses have an idle execution context timeout of 8 hours, which is unchanged from prior releases.
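To check which of these defaults currently apply in your environment, something like the sketch below can help. This is a minimal illustration, not from the original post: it assumes an existing PySpark/Databricks session object named spark, and relies on SET with no value echoing the current parameter setting.

    # Show the current STATEMENT_TIMEOUT; SET with no value echoes the setting
    # (or "<undefined>" if it has never been set in this session).
    spark.sql("SET STATEMENT_TIMEOUT").show(truncate=False)

    # Read the serverless execution timeout from the Spark conf, falling back
    # to a note when it has not been set explicitly.
    timeout = spark.conf.get("spark.databricks.execution.timeout", None)
    print(timeout or "spark.databricks.execution.timeout not set (default applies)")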

What to Do

  • Increase the timeout if your SQL jobs regularly exceed these new limits. You can do this at the session, workspace, or job level using the STATEMENT_TIMEOUT property.

  • For jobs on serverless compute, adjust spark.databricks.execution.timeout if needed.

  • Make sure existing materialized views and streaming tables are refreshed so they pick up the new timeout settings (a refresh sketch follows this list).
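A minimal sketch of that refresh step from Python follows; the object names sales_mv and events_st are hypothetical placeholders, and it again assumes a session object named spark.

    # Re-run the refresh so each object picks up the warehouse's current
    # timeout settings; the names below are hypothetical placeholders.
    spark.sql("REFRESH MATERIALIZED VIEW sales_mv")
    spark.sql("REFRESH STREAMING TABLE events_st")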

Setting the Timeout

  • For SQL statements:
    SET STATEMENT_TIMEOUT = <number of seconds>;
    For example, SET STATEMENT_TIMEOUT = 86400; sets a 24-hour timeout.

  • For workspace-wide changes: Go to workspace admin settings > Compute > Manage SQL warehouses, and update the SQL Configuration Parameters.

  • For Spark Connect: use the Spark config property
    spark.databricks.execution.timeout = <number of seconds> (a combined Python sketch follows this list).
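
Combined into one Python session, the settings above look roughly like this sketch (the values are examples only, and spark is again an assumed session object):

    # Session-level statement timeout: 24 hours, matching the example above.
    spark.sql("SET STATEMENT_TIMEOUT = 86400")

    # Execution timeout for serverless / Spark Connect workloads: 24 hours.
    spark.conf.set("spark.databricks.execution.timeout", "86400")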

ekmazars
New Contributor

Thanks for your detailed reply!

It turns out the timeouts were being triggered by networking issues.
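For transient network drops like this, one pragmatic option is to catch the abandoned-handle error and resubmit the statement. The sketch below is a hypothetical illustration, not something from this thread: spark and the query string are placeholders.

    # Retry once when the server reports the operation handle was abandoned
    # (e.g. after a network interruption). The query is a placeholder.
    query = "SELECT count(*) FROM main.default.some_table"  # hypothetical table
    try:
        result = spark.sql(query).collect()
    except Exception as e:
        if "INVALID_HANDLE.OPERATION_ABANDONED" in str(e):
            # The old handle is gone; resubmitting starts a fresh operation.
            result = spark.sql(query).collect()
        else:
            raise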
