Internal GRPC errors when using databricks connect
Monday
Hey there,
In our local development flow we rely heavily on Databricks Asset Bundles and Databricks Connect. Recently, locally run workflows (i.e., plain PySpark Python files) have begun to fail frequently with the following gRPC error:
pyspark.errors.exceptions.connect.SparkConnectGrpcException: <_MultiThreadedRendezvous of RPC that terminated with:
status = StatusCode.INTERNAL
details = "Cannot operate on a handle that is closed."
debug_error_string = "UNKNOWN:Error received from peer {grpc_message:"Cannot operate on a handle that is closed.", grpc_status:13, created_time:"2025-03-17T15:51:24.396549+01:00"}"
The error is non-deterministic: after a cluster restart we can sometimes run workflows once or twice before it appears again. It might be coincidental given the non-deterministic nature, but some PySpark code seems to fail with this error more often than other code.
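As a stopgap while we investigate, we've been wrapping flaky actions in a small retry helper. This is just a sketch, not a fix: `run_with_retry` is our own helper name, and in practice we pass `pyspark.errors.exceptions.connect.SparkConnectGrpcException` as the exception type to retry on.

```python
import time


def run_with_retry(action, retries=3, delay=2.0, exc_types=(Exception,)):
    """Call `action()`; on a matching exception, wait and retry.

    `action` is a zero-argument callable (e.g. a lambda wrapping a
    DataFrame action). Re-raises the last exception once retries are
    exhausted.
    """
    for attempt in range(1, retries + 1):
        try:
            return action()
        except exc_types:
            if attempt == retries:
                raise
            time.sleep(delay)
```

In our case this would look something like `run_with_retry(lambda: df.collect(), exc_types=(SparkConnectGrpcException,))` — though retrying obviously just papers over whatever is closing the handle.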
databricks-connect version: 15.4.7
databricks-sdk: 0.29.0
cluster runtime: 15.4 LTS (includes Apache Spark 3.5.0, Scala 2.12)
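Since Spark Connect client/server version mismatches can surface as opaque gRPC errors, here is a quick way to double-check what is actually installed locally against the pins above. A minimal sketch; the package names are the ones listed above, and `get_version` is a hypothetical helper:

```python
from importlib import metadata


def get_version(package):
    """Return the installed version of `package`, or None if absent."""
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return None


for pkg in ("databricks-connect", "databricks-sdk"):
    print(f"{pkg}: {get_version(pkg)}")
```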
Researching this error returns basically zero results, so I'm asking whether someone else has hit and solved this before, or whether it is a known issue?
Thanks!

