Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Internal errors when running SQLs

utkarshamone
New Contributor II

We are running Databricks on GCP with a classic SQL warehouse. It's on the current version (v2025.15).

We have a pipeline that runs DBT on top of the SQL warehouse

Since the 9th of May, our queries have been failing intermittently with internal errors from Databricks (examples below). We saw these errors occasionally before, but they were one-offs; now they are hampering our production pipeline.

How can this issue be fixed?

Thank you in advance for the help


lingareddy_Alva
Honored Contributor II

Hi @utkarshamone 

The error messages you've shared, such as:

-- [INTERNAL_ERROR] Query could not be scheduled: HTTP Response code: 503
-- ExecutorLostFailure ... exited with code 134, sigabrt
-- Internal error

indicate that your Databricks SQL warehouse on GCP (v2025.15) is encountering intermittent internal issues likely tied to:

Root Causes
1. Databricks Platform Instability or Bugs (Post-May 9 Update)
-- Since the behavior changed after May 9, it's possible that a recent version or backend update introduced bugs or instability.

2. SQL Warehouse Resource Exhaustion or Scheduling Delay
-- Code 503 is often due to temporary overload or service unavailability.
-- The SIGABRT (exit code 134) plus ExecutorLostFailure may come from exceeding memory limits or a critical failure in executor management.

3. Concurrency and Load Patterns
-- If your DBT runs or other jobs were scaled up or changed recently, they might be exceeding the SQL warehouse's concurrency or memory capacity.


Recommended Actions
1. Switch to Pro SQL Warehouse (If Not Already)
-- Classic SQL Warehouses are more prone to instability.
-- Pro or Serverless SQL Warehouses offer auto-scaling, better fault tolerance, and enhanced scheduling.
2. Enable Query Retry in DBT
-- Add automatic retry logic around DBT SQL models using macros or a retry decorator for flaky jobs.
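A minimal sketch of generic retry logic with exponential backoff, assuming you invoke dbt (or individual queries) from a Python orchestrator. This is not a dbt API; `run_with_retry` and `flaky_query` are illustrative names:

```python
import time

def run_with_retry(fn, max_attempts=3, base_delay=5.0, retryable=(RuntimeError,)):
    """Call fn(); on a retryable error, wait with exponential backoff and retry."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except retryable:
            if attempt == max_attempts:
                raise  # out of attempts: surface the original error
            time.sleep(base_delay * 2 ** (attempt - 1))

# Example: simulate a call that fails twice with a transient 503, then succeeds.
calls = {"n": 0}
def flaky_query():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("HTTP Response code: 503")
    return "ok"

print(run_with_retry(flaky_query, base_delay=0.01))  # prints: ok
```

In practice you would narrow `retryable` to the specific transient errors (e.g. 503 scheduling failures) so genuine SQL errors still fail fast.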

3. Increase Warehouse Size / Concurrency Slots
If you're seeing resource contention, increase the SQL warehouse size to provide more memory and better scheduling.
4. Check DBT Query Footprint
-- Run DESCRIBE HISTORY <table>; on recently changed Delta tables, or query system.query_log to investigate any long-running or memory-intensive queries introduced recently.
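If your workspace has system tables enabled, a sketch along these lines can surface the heaviest recent statements (the table and column names assume the system.query.history system table; adjust to whatever your workspace exposes):

```sql
-- Illustrative: find the slowest statements from the last 7 days
SELECT statement_text, total_duration_ms, executed_by
FROM system.query.history
WHERE start_time >= current_timestamp() - INTERVAL 7 DAYS
ORDER BY total_duration_ms DESC
LIMIT 20;
```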

5. Open a Databricks Support Ticket
-- If the errors persist, include the failing query IDs, timestamps, and the full error messages so Databricks can check backend logs on their side.

 

 

 

LR
