Internal error. Attach your notebook to a different compute or restart the current compute. java.lang.RuntimeException

amandaolens
Databricks Partner
Internal error. Attach your notebook to a different compute or restart the current compute.
java.lang.RuntimeException: abort: DriverClient destroyed
	at com.databricks.backend.daemon.driver.DriverClient.$anonfun$poll$3(DriverClient.scala:577)
	at scala.concurrent.Future.$anonfun$flatMap$1(Future.scala:307)
	at scala.concurrent.impl.Promise.$anonfun$transformWith$1(Promise.scala:54)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:77)
	at com.databricks.threading.DatabricksExecutionContext$InstrumentedRunnable.run(DatabricksExecutionContext.scala:36)
	at com.databricks.threading.NamedExecutor$$anon$2.$anonfun$run$2(NamedExecutor.scala:366)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at com.databricks.logging.UsageLogging.withAttributionContext(UsageLogging.scala:420)
	at com.databricks.logging.UsageLogging.withAttributionContext$(UsageLogging.scala:418)
	at com.databricks.threading.NamedExecutor.withAttributionContext(NamedExecutor.scala:285)
	at com.databricks.threading.NamedExecutor$$anon$2.$anonfun$run$1(NamedExecutor.scala:364)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at com.databricks.context.integrity.IntegrityCheckContext$ThreadLocalStorage$.withValue(IntegrityCheckContext.scala:44)
	at com.databricks.threading.NamedExecutor$$anon$2.run(NamedExecutor.scala:356)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:829)
 
 
Facing this error; Spark is crashing automatically.

amandaolens
Databricks Partner

@Retired_mod But the same code was working 5 days ago.

amandaolens
Databricks Partner

Nope, I'm still facing the issue, even though CPU and memory usage are both under 50%.

The same piece of code works on one dataset and fails on another. A few days back it was working fine for all the datasets.

amandaolens
Databricks Partner

@Retired_mod Yeah, one dataset has slightly more data points; the schemas are the same. When Spark crashed I checked the memory usage, and it was around 50%.

LokeshManne
New Contributor III

This error is caused by an overlap of connectors or instances. If you see an error like the one below:

[Screenshot: LokeshManne_1-1747303303000.png]

 

If you see multiple clusters with the same name, that is caused by running notebook_1 on a cluster attached to it and then re-running notebook_2 with %run (a sub-notebook run) that connects to the same cluster as notebook_1. This produces the scenario below: the same cluster listed under two resources.

[Screenshot: LokeshManne_0-1747303285230.png]
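To make the scenario concrete, here is a minimal sketch of the calling pattern described above, assuming notebook_2 invokes notebook_1 inline; the relative path and the timeout value are illustrative placeholders, not taken from the post:

# Cell in notebook_2 (sketch only). Either form executes notebook_1 on the cluster
# that notebook_2 is currently attached to, so one cluster ends up serving both
# notebooks at once, which is the duplicate-cluster situation shown above.

# Option 1: the %run magic (must be the only content in its cell):
# %run ./notebook_1

# Option 2: dbutils.notebook.run(), which launches notebook_1 as a child run on the
# same cluster; 600 is an arbitrary timeout in seconds.
result = dbutils.notebook.run("./notebook_1", 600)
print(result)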

Solution:
Detach the cluster from notebook_1 and notebook_2.

Now refresh the web page and re-attach the cluster to notebook_2, then run cells individually one by one instead of Run All. Once you have run two or three cells successfully, do Run All and test; it should work as shown below.

[Screenshot: LokeshManne_2-1747304069419.png]
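As a quick sanity check after re-attaching, you can print which cluster each notebook is actually using; the spark.databricks.clusterUsageTags.* Spark conf keys are standard on Databricks clusters, though this snippet is my own sketch rather than part of the original answer:

# Run this cell in both notebook_1 and notebook_2 after re-attaching to confirm
# each notebook is on the cluster you expect (spark is the notebook's SparkSession).
cluster_id = spark.conf.get("spark.databricks.clusterUsageTags.clusterId")
cluster_name = spark.conf.get("spark.databricks.clusterUsageTags.clusterName")
print(f"Attached to cluster: {cluster_name} ({cluster_id})")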

Lokesh Manne