Internal error: Attach your notebook to a different compute or restart the current compute.
11-17-2023 01:52 PM
I am currently using a personal compute cluster [13.3 LTS (includes Apache Spark 3.4.1, Scala 2.12)] on GCP attached to a notebook. After running a few commands without an issue, I end up getting this error:
Internal error. Attach your notebook to a different compute or restart the current compute.
java.lang.RuntimeException: abort: DriverClient destroyed
    at com.databricks.backend.daemon.driver.DriverClient.$anonfun$poll$3(DriverClient.scala:577)
    at scala.concurrent.Future.$anonfun$flatMap$1(Future.scala:307)
    at scala.concurrent.impl.Promise.$anonfun$transformWith$1(Promise.scala:54)
    at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:77)
    at com.databricks.threading.DatabricksExecutionContext$InstrumentedRunnable.run(DatabricksExecutionContext.scala:36)
    at com.databricks.threading.NamedExecutor$$anon$2.$anonfun$run$2(NamedExecutor.scala:366)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    at com.databricks.logging.UsageLogging.withAttributionContext(UsageLogging.scala:420)
    at com.databricks.logging.UsageLogging.withAttributionContext$(UsageLogging.scala:418)
    at com.databricks.threading.NamedExecutor.withAttributionContext(NamedExecutor.scala:285)
    at com.databricks.threading.NamedExecutor$$anon$2.$anonfun$run$1(NamedExecutor.scala:364)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    at com.databricks.context.integrity.IntegrityCheckContext$ThreadLocalStorage$.withValue(IntegrityCheckContext.scala:44)
    at com.databricks.threading.NamedExecutor$$anon$2.run(NamedExecutor.scala:356)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
Within the Databricks UI, I get the following errors:
When I restart my cluster, I am able to run queries without any errors 10-15 times, but then I end up getting this same error again. I tried running my queries on a new cluster, but I face the same errors after 10-15 runs.
This seems to be an error isolated to my user, since a colleague tried using his own personal compute cluster and was able to run queries without any issues.
Here are the things I have tried so far while logged into my account:
- Creating a completely new cluster -- still having issues after a couple of queries
- Restarting my existing cluster -- still having issues after a couple of queries
- Opening a new incognito tab and running queries -- still getting the same errors
The only thing left for me to test is logging into Databricks and running my queries on a separate device.
Please let me know if anyone else has faced this issue previously and if there is any way to resolve this.
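In the meantime, one client-side workaround is to wrap query execution in a retry helper that catches only this specific driver-loss failure and waits before re-running. This is a minimal sketch, not an official fix; it assumes the error surfaces as a `RuntimeError` whose message contains "DriverClient destroyed" (as in the stack trace above), and `run_with_retry` plus the lambda passed to it are hypothetical names, not Databricks APIs.

```python
import time

def run_with_retry(fn, retries=3, delay=5.0):
    """Re-run fn, retrying only on the 'DriverClient destroyed' failure.

    fn      -- zero-argument callable that executes the query
    retries -- number of retry attempts after the first failure
    delay   -- seconds to sleep between attempts (gives the driver
               time to recover after a restart)
    """
    for attempt in range(retries + 1):
        try:
            return fn()
        except RuntimeError as e:
            # Only retry the specific driver-loss error; re-raise anything else,
            # and re-raise once the retry budget is exhausted.
            if "DriverClient destroyed" not in str(e) or attempt == retries:
                raise
            time.sleep(delay)

# Hypothetical usage inside a notebook cell:
# result = run_with_retry(lambda: spark.sql("SELECT 1").collect())
```

This only papers over the symptom every 10-15 runs; it does not address whatever is killing the driver client in the first place.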
Thanks,
Labels:
- Cluster
- Databricks Notebooks
11-20-2023 06:10 AM
Hey Kaniz,
Here is my cluster configuration:
And here are the advanced options:
Logging and init scripts are set to default, and the Google Service Account is linked to my databricks-artifact-registry.