GCP Cluster will not boot correctly with Libraries preconfigured - notebooks never attach
02-17-2025 07:15 PM
I am running Databricks 15.4 LTS on a single-node `n1-highmem-32` for a PySpark / GraphFrames app (we are not using the built-in `graphframes` on the ML image because we don't need a GPU). The cluster starts fine as long as no libraries are attached. I then configure the libraries: GraphFrames as a Spark Package via the Maven UI, plus our package `whl` and a `requirements.txt` that I have uploaded to a volume. Everything works fine: I can use the cluster, run `from graphframes import GraphFrame`, and all is well.
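For context, this is roughly how the same library set could be attached programmatically (e.g. from a workflow, via a POST to the Libraries API install endpoint). The Maven coordinates and volume paths below are illustrative assumptions, not our exact values:

```json
{
  "cluster_id": "<cluster-id>",
  "libraries": [
    { "maven": { "coordinates": "graphframes:graphframes:0.8.3-spark3.5-s_2.12" } },
    { "whl": "/Volumes/<catalog>/<schema>/<volume>/our_package-0.1.0-py3-none-any.whl" },
    { "requirements": "/Volumes/<catalog>/<schema>/<volume>/requirements.txt" }
  ]
}
```

The failure mode is the same whether the libraries are attached through the UI or like this, which is why I can't just script around it.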
Then I stop the cluster. The libraries are still shown as configured in the cluster's Libraries tab.
Now I boot the cluster again. The UI says the cluster has finished booting, and the libraries spinner shows complete. I try to attach and run a notebook, and it sits there forever; it never attaches. Eventually I get this exception:
```
Failure starting repl. Try detaching and re-attaching the notebook.
  at com.databricks.spark.chauffeur.ExecContextState.processInternalMessage(ExecContextState.scala:347)
  at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)
```
This is a blocker for us, and it looks like a bug. What should I do? I am stuck: I cannot automate this in a workflow, because working around the bug requires manual intervention every time the cluster restarts. We don't have Databricks support at this point, so I am here asking questions 🙂
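In the meantime, the workaround I'm considering is leaving the cluster's Libraries list empty (so boot succeeds) and installing everything at notebook scope instead. A sketch of the first notebook cell, with illustrative volume paths; note the GraphFrames JVM side would still need to come from cluster Spark config (e.g. `spark.jars.packages`), since `%pip` only covers the Python package:

```
%pip install /Volumes/<catalog>/<schema>/<volume>/our_package-0.1.0-py3-none-any.whl
%pip install -r /Volumes/<catalog>/<schema>/<volume>/requirements.txt
dbutils.library.restartPython()
```

This avoids having anything preconfigured at boot, but it's a per-notebook cost and doesn't help for jobs that assume cluster libraries, so I'd still like to know whether the restart-with-libraries behavior is a known bug.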
4 weeks ago
Bump... anyone?

