Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Intermittent task execution issues

Malthe
Contributor III

We're getting intermittent errors:

[ISOLATION_STARTUP_FAILURE.SANDBOX_STARTUP] Failed to start isolated execution environment. Sandbox startup failed.
Exception class: INTERNAL.
Exception message: INTERNAL: LaunchSandboxRequest create failed - Error executing LivenessCheckStep: failed to perform livenessCommand for container [REDACTED] with commands sh -c (grep -qE ':[0]*1F40' /proc/net/tcp) || (grep -qE ':[0]*1F40' /proc/net/tcp6) || (echo "Error: No process listening on port 8000" && exit 1) and error max deadline has passed, failed to perform livenessCommand for container [REDACTED] with error , cpu.stat: NrPeriods = 0,  NrThrottled = 0, ThrottledTime = 0.
Last sandbox stdout: .
Last sandbox stderr: .
Please contact Databricks support. SQLSTATE: XXKSS
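
For context, the livenessCommand in the error is just checking whether anything is listening on port 8000 (1F40 in hex) by grepping /proc/net/tcp. A rough Python equivalent of that probe, purely for illustration (this is not anything Databricks exposes):

# Illustrative only: mirrors the sh/grep liveness probe quoted in the error above.
# /proc/net/tcp lists sockets with the local port in hex, so port 8000 == 0x1F40.
def is_port_8000_listening() -> bool:
    port_hex = format(8000, "04X")  # "1F40"
    for path in ("/proc/net/tcp", "/proc/net/tcp6"):
        try:
            with open(path) as f:
                next(f)  # skip the header line
                for line in f:
                    local_address = line.split()[1]  # e.g. "00000000:1F40"
                    if local_address.endswith(f":{port_hex}"):
                        return True
        except FileNotFoundError:
            pass
    return False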

These take several minutes to "complete" (i.e. fail), and retrying seems to repeat the issue. This is just one of the ways we have to babysit our ETL jobs every now and then. This is on serverless compute, but it can happen on other compute types as well.
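
In the meantime we can wrap the affected operation in a retry with backoff so it doesn't need manual babysitting; a rough sketch (the helper name, the match on the error string, and the backoff values are all illustrative, not a Databricks API):

import time

# Illustrative retry wrapper for operations that intermittently fail with
# ISOLATION_STARTUP_FAILURE.SANDBOX_STARTUP; names and delays are placeholders.
def run_with_retry(operation, max_attempts=3, initial_delay_s=60):
    delay = initial_delay_s
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception as e:
            transient = "ISOLATION_STARTUP_FAILURE" in str(e)
            if not transient or attempt == max_attempts:
                raise
            time.sleep(delay)  # give sandbox provisioning time to recover
            delay *= 2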

Is Databricks aware of these issues and monitoring this?

3 REPLIES

sandy_123
New Contributor II

Hi @Malthe ,

This might be because of the new DBR 18.0 GA release yesterday (January 2026 - Azure Databricks | Microsoft Learn). You might need to use a custom Spark version until the engineering team fixes this issue in DBR. Below is the response from Databricks Support for a similar sort of problem.

"

There was a DBR release on 7th August (14.3.10 -> 14.3.11).
Our engineering team identified the issue and the fix is scheduled to be deployed on September 16th.

Until the fix is deployed, you can use the custom Spark image version below in your cluster.

Enter the following in the Custom Spark Version field. The custom image provided was the old DBR prior to 8th August.
custom:release__14.3.x-snapshot-scala2.12__databricks-universe__14.3.10__9b6cd4f__debafb7__jenkins__1cbb705__format-3

"

Here is a link to instructions on how to enable specifying a custom Spark version:

Run a custom Databricks Runtime on your cluster - Databricks
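
If you'd rather set this programmatically than through the UI, the same string can, as far as I know, be passed as spark_version via the Clusters API. A rough sketch against the REST endpoint (workspace URL, token, cluster ID and node type are placeholders):

import requests

# Rough sketch: pin a cluster to a custom DBR image via the Clusters API.
# HOST, TOKEN, cluster_id and node_type_id are placeholders for your workspace.
HOST = "https://<workspace>.azuredatabricks.net"
TOKEN = "<personal-access-token>"

payload = {
    "cluster_id": "<cluster-id>",
    "spark_version": "custom:release__14.3.x-snapshot-scala2.12__databricks-universe__14.3.10__9b6cd4f__debafb7__jenkins__1cbb705__format-3",
    "node_type_id": "<node-type>",
    "num_workers": 2,
}
resp = requests.post(
    f"{HOST}/api/2.0/clusters/edit",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
)
resp.raise_for_status()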

Malthe
Contributor III

According to https://learn.microsoft.com/en-us/azure/databricks/release-notes/serverless/, 17.3 is the latest release for serverless, and we're on Serverless Environment 4.

Here's the traceback:

File /databricks/python/lib/python3.12/site-packages/pyspark/sql/connect/client/core.py:2433, in SparkConnectClient._handle_rpc_error(self, rpc_error)
   2429             logger.debug(f"Received ErrorInfo: {info}")
   2431             self._handle_rpc_error_with_error_info(info, status.message, status_code)  # EDGE
-> 2433             raise convert_exception(
   2434                 info,
   2435                 status.message,
   2436                 self._fetch_enriched_error(info),
   2437                 self._display_server_stack_trace(),
   2438                 status_code,
   2439             ) from None
   2441     raise SparkConnectGrpcException(
   2442         message=status.message,
   2443         sql_state=ErrorCode.CLIENT_UNEXPECTED_MISSING_SQL_STATE,  # EDGE
   2444         grpc_status_code=status_code,
   2445     ) from None
   2446 else:

It happened during a Delta Lake merge operation, and just now it happened again (the exact same task out of dozens of tasks in our job).
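
For reference, the failing step is an ordinary Delta Lake merge along these lines (table and column names are placeholders, not our actual pipeline code):

from delta.tables import DeltaTable

# Shape of the failing step; names are placeholders, not our real pipeline.
# `spark` is the session provided by the Databricks job/notebook.
updates_df = spark.read.table("catalog.schema.staging_updates")
target = DeltaTable.forName(spark, "catalog.schema.target_table")
(
    target.alias("t")
    .merge(updates_df.alias("s"), "t.id = s.id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)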

aleksandra_ch
Databricks Employee

Hi @Malthe ,

Please check whether a custom Spark image is used in the jobs. If it is, try removing it and sticking to the default parameters.

If not, I highly recommend opening a support ticket (assuming you are on Azure Databricks) via the Azure portal.
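
One quick way to check is to pull the job settings and look for a "custom:" prefix in any cluster's spark_version. A rough sketch with the Python SDK (the job ID is a placeholder; this is just an illustration, not an official procedure):

from databricks.sdk import WorkspaceClient

# Rough sketch: flag any job cluster that pins a custom DBR image.
# The job ID is a placeholder; authentication comes from the usual SDK config.
w = WorkspaceClient()
job = w.jobs.get(job_id=123456789)

clusters = []
for jc in (job.settings.job_clusters or []):
    clusters.append((jc.job_cluster_key, jc.new_cluster))
for task in (job.settings.tasks or []):
    if task.new_cluster:
        clusters.append((task.task_key, task.new_cluster))

for name, spec in clusters:
    if spec and spec.spark_version and spec.spark_version.startswith("custom:"):
        print(f"{name} uses a custom image: {spec.spark_version}")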

Best regards,