Data Engineering

Internal errors when running SQL queries

utkarshamone
New Contributor II

We are running Databricks on GCP with a Classic SQL warehouse. It's on the current version (v2025.15).

We have a pipeline that runs dbt on top of the SQL warehouse.

Since the 9th of May, our queries have been failing intermittently with internal errors from Databricks that look like the ones in the attached screenshots. We saw this kind of error before, but only as one-offs; now they are hampering our production pipeline.

How can this issue be fixed?

Thank you in advance for the help

Attachments: Screenshot 2025-05-15 at 4.51.49 pm.png, Screenshot 2025-05-15 at 5.23.57 pm.png, Screenshot 2025-05-15 at 5.24.12 pm.png

4 REPLIES

lingareddy_Alva
Honored Contributor II

@utkarshamone 

The error messages you've shared—such as:

-- [INTERNAL_ERROR] Query could not be scheduled: HTTP Response code: 503
-- ExecutorLostFailure ... exited with code 134, sigabrt
-- Internal error

—indicate that your Databricks SQL warehouse on GCP (v2025.15) is encountering intermittent internal issues likely tied to:

Root Causes
1. Databricks Platform Instability or Bugs (Post May 9 Update)
-- Since you're observing a change in behavior after May 9, it's possible the recent version or backend updates introduced bugs or instability.

2. SQL Warehouse Resource Exhaustion or Scheduling Delay
-- An HTTP 503 is often due to temporary overload or service unavailability.
-- The SIGABRT + ExecutorLostFailure may come from exceeding executor memory limits or a critical failure in executor management.

3. Concurrency and Load Patterns
-- If your dbt runs or other jobs were scaled up or changed recently, they might be exceeding the SQL warehouse's concurrency or memory capacity.


Recommended Actions
1. Switch to Pro SQL Warehouse (If Not Already)
-- Classic SQL Warehouses are more prone to instability.
-- Pro or Serverless SQL Warehouses offer auto-scaling, better fault tolerance, and enhanced scheduling.
2. Enable Query Retry in dbt
-- Add automatic retry logic around your dbt models or invocation for flaky jobs, e.g. via macros, a retry wrapper, or dbt's built-in dbt retry command (dbt-core 1.6+), which reruns from the point of failure.

3. Increase Warehouse Size / Concurrency Slots
If you're seeing resource contention, increase the SQL warehouse size to provide more memory and better scheduling.
4. Check dbt Query Footprint
DESCRIBE HISTORY <table>;

or query the query history system table to investigate any long-running or memory-intensive queries introduced recently (see the sketch after this list).

5. Open a Databricks Support Ticket
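
For point 4, here is a minimal sketch of the kind of check I mean, assuming the query history system table (system.query.history) is enabled in your workspace; the exact column names may differ slightly between releases.

-- Sketch only: surface the heaviest statements from the last 7 days.
-- Assumes system.query.history is enabled; adjust column names if your
-- workspace exposes a different schema.
SELECT
  start_time,
  executed_by,
  execution_status,
  total_duration_ms,
  read_bytes,
  statement_text
FROM system.query.history
WHERE start_time >= current_timestamp() - INTERVAL 7 DAYS
ORDER BY total_duration_ms DESC
LIMIT 20;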

 

LR

utkarshamone
New Contributor II

Hi @lingareddy_Alva 
Thank you for the quick reply and suggestions

Isi
Contributor III

Hi @utkarshamone ,

We faced a similar issue and I wanted to share our findings, which might help clarify what’s going on.

We’re using a Classic SQL Warehouse size L (v2025.15), and executing a dbt pipeline on top of it.

Our dbt jobs started to fail with internal Databricks errors, and they are affecting our production pipeline too.

I then checked the pipeline in depth and saw the following in the query profile and the Spark UI:

Classic Warehouse (FAILED)

Execution details:

  • Fixed 256 shuffle partitions
  • Fails in: PhotonUnionShuffleExchangeSink
    • Peak memory total ≈ 91.9 GiB
    • 0 rows output
    • Multiple executors exited with code 134 (SIGABRT)
  • Spill = 0 bytes (crashes before spilling)
  • Dead executors, hundreds of failed tasks
  • Off‑heap memory peak = 7–8 GiB before crash
  • Input: 213 GiB read, 671 M rows
  • Task time in Photon = 18 %
  • My analysis: Photon may under-estimate memory requirements during the union shuffle. One partition becomes too large (“elephant”), exceeds executor memory, malloc fails, and triggers SIGABRT.

 

Serverless Warehouse (SUCCEEDED)

Execution details:

  • AQE enabled, partitions dynamically adjusted (~2,000 early, coalesced later)
  • Sort operators: 52 GiB / 46 GiB total
  • ShuffleExchange: Peak memory = 18 GiB, Peak per-task ≈ 280 MiB
  • No executor losses
  • Spill = 0 bytes
  • Failed Tasks = 0
  • Runtime: 1 min 46 s
  • Task time in Photon = 99 %
  • My analysis: AQE + newer Photon version effectively balances partitions and avoids memory hotspots.

 

We reported this to Databricks support. They confirmed:

"Engineering identified the root cause and has prepared a fix.
It will be included in the next maintenance cycle, scheduled for end of May 2025."

Until the fix is deployed:

  • Check the query profile and Spark UI to identify where the hotspot occurs (see the sketch after this list for pulling the failed statements)
  • Switch to Serverless SQL Warehouse provisionally for production dbt pipelines (stable + memory-safe)
  • Reevaluate using Classic at the end of May, once the new version is available
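
If it helps, here is a rough way to pull the failing statements so you can open their query profiles. This is only a sketch: it assumes the query history system table system.query.history is enabled, and the execution_status, error_message and compute.warehouse_id columns may need adjusting for your workspace.

-- Sketch only: list statements that have failed since May 9 so their
-- query profiles can be inspected in the UI.
SELECT
  statement_id,
  start_time,
  compute.warehouse_id,
  error_message,
  statement_text
FROM system.query.history
WHERE execution_status = 'FAILED'
  AND start_time >= TIMESTAMP '2025-05-09'
ORDER BY start_time DESC;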

 

Hope this helps! 🙂

Isi

utkarshamone
New Contributor II

Hi @Isi 
Thanks for your reply!

Will look into changing the warehouse type
