05-15-2025 08:29 AM
We are running Databricks on GCP with a classic SQL warehouse. It's on the current version (v2025.15).
We have a pipeline that runs DBT on top of the SQL warehouse
Since the 9th of May, our queries have been failing intermittently with internal errors from Databricks that look like this. We were getting these kinds of errors before, but they were one-offs; now they are hampering our production pipeline.
How can this issue be fixed?
Thank you in advance for the help
05-15-2025 11:01 AM
The error messages you've shared—such as:
-- [INTERNAL_ERROR] Query could not be scheduled: HTTP Response code: 503
-- ExecutorLostFailure ... exited with code 134, sigabrt
-- Internal error
—indicate that your Databricks SQL warehouse on GCP (v2025.15) is encountering intermittent internal issues likely tied to:
Root Causes
1. Databricks Platform Instability or Bugs (Post May 9 Update)
-- Since you're observing a change in behavior after May 9, it's possible the recent version or backend updates introduced bugs or instability.
2. SQL Warehouse Resource Exhaustion or Scheduling Delay
-- Code 503 is often due to temporary overload or service unavailability.
-- The sigabrt + ExecutorLostFailure may be from exceeding memory limits or a critical failure in executor management.
3. Concurrency and Load Patterns
-- If your DBT runs or other jobs were scaled up or changed recently, they might be exceeding the SQL warehouse's concurrency or memory capacity.
Recommended Actions
1. Switch to Pro SQL Warehouse (If Not Already)
-- Classic SQL Warehouses are more prone to instability.
-- Pro or Serverless SQL Warehouses offer auto-scaling, better fault tolerance, and enhanced scheduling.
2. Enable Query Retry in DBT
-- Add automatic retry logic around DBT SQL models using macros or a retry decorator for flaky jobs (see the retry sketch after this list).
3. Increase Warehouse Size / Concurrency Slots
-- If you're seeing resource contention, increase the SQL warehouse size to provide more memory and better scheduling.
4. Check DBT Query Footprint
-- Run DESCRIBE HISTORY <table>; or query system.query_log to investigate any long-running or memory-intensive queries introduced recently (see the query-history sketch after this list).
5. Open a Databricks Support Ticket
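For point 2, here is a minimal retry sketch in Python, assuming you invoke dbt from a scheduler or shell script. The command, target name, attempt count, and backoff are placeholders to adapt to your pipeline; if you are on dbt Core 1.6+, the dbt retry command (reruns from the point of failure) and the dbt-databricks connect_retries profile setting are also worth looking at.

# Sketch only: blunt retry wrapper around the dbt invocation for transient
# 503 / scheduling errors. Command, target, attempt count and backoff are
# assumptions to adjust to your own project.
import subprocess
import sys
import time

MAX_ATTEMPTS = 3
BACKOFF_SECONDS = 60

def run_dbt() -> int:
    # Run the dbt build and return its exit code; change the command to match your project.
    return subprocess.run(["dbt", "build", "--target", "prod"]).returncode

exit_code = 1
for attempt in range(1, MAX_ATTEMPTS + 1):
    exit_code = run_dbt()
    if exit_code == 0:
        break
    print(f"dbt attempt {attempt}/{MAX_ATTEMPTS} failed with exit code {exit_code}")
    if attempt < MAX_ATTEMPTS:
        time.sleep(BACKOFF_SECONDS * attempt)  # linear backoff between attempts

sys.exit(exit_code)

Note this retries the whole run; model-level retries would need a macro or orchestrator-level logic as mentioned above.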
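For point 4, a sketch of how the query-history check could look from a small script, assuming the databricks-sql-connector package and a personal access token. The hostname, HTTP path, table name, and column names below are assumptions to verify against your workspace (recent workspaces expose query history as system.query.history; adjust if yours differs).

# Sketch only: pull the longest-running statements since May 9 to spot heavy
# or long-running dbt queries. Verify table and column names in your workspace
# (e.g. with DESCRIBE TABLE) before relying on this.
from databricks import sql

with sql.connect(
    server_hostname="dbc-xxxx.gcp.databricks.com",    # placeholder workspace host
    http_path="/sql/1.0/warehouses/<warehouse-id>",   # your SQL warehouse HTTP path
    access_token="<personal-access-token>",
) as conn:
    with conn.cursor() as cur:
        cur.execute("""
            SELECT statement_text, total_duration_ms, execution_status
            FROM system.query.history
            WHERE start_time >= '2025-05-09'
            ORDER BY total_duration_ms DESC
            LIMIT 20
        """)
        for row in cur.fetchall():
            print(row)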
a month ago
Hi @lingareddy_Alva
Thank you for the quick reply and suggestions
05-17-2025 11:45 AM
Hi @utkarshamone ,
We faced a similar issue and I wanted to share our findings, which might help clarify what’s going on.
We’re using a Classic SQL Warehouse size L (v2025.15), and executing a dbt pipeline on top of it.
Our dbt jobs started to fail with internal Databricks errors, which is affecting our production pipeline too.
I then checked the pipeline in depth and saw the following in the query profile and Spark UI:
-- Classic Warehouse: FAILED [execution details screenshot]
-- Serverless Warehouse: SUCCEEDED [execution details screenshot]
We reported this to Databricks support. They confirmed:
"Engineering identified the root cause and has prepared a fix.
It will be included in the next maintenance cycle, scheduled for end of May 2025."
Until the fix is deployed, running the affected models on a Serverless SQL Warehouse has been a working workaround for us (see the execution results above).
Hope this helps! 🙂
Isi
a month ago
Hi @Isi
Thanks for your reply!
Will look into changing the warehouse type