The spark driver has stopped unexpectedly and is restarting. Your notebook will be automatically reattached.

JKR
New Contributor III

Getting the error below.

Context: I'm using a Databricks shared interactive cluster to run multiple parallel scheduled jobs at the same time, every 5 minutes. When I check Ganglia, the driver node's memory reaches almost max, then the driver restarts and the same cycle repeats. I'm not using any of the operations below:

  • collect(), which brings a large amount of data to the driver.
  • Converting a large DataFrame to a pandas DataFrame with toPandas().

java.lang.OutOfMemoryError: unable to create new native thread
    at java.lang.Thread.start0(Native Method)
    at java.lang.Thread.start(Thread.java:719)
    at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957)
    at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1367)
    at scala.concurrent.impl.ExecutionContextImpl.execute(ExecutionContextImpl.scala:24)
    at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:72)
    at scala.concurrent.impl.Promise$KeptPromise$Kept.onComplete(Promise.scala:372)
    at scala.concurrent.impl.Promise$KeptPromise$Kept.onComplete$(Promise.scala:371)
    at scala.concurrent.impl.Promise$KeptPromise$Successful.onComplete(Promise.scala:379)
    at scala.concurrent.impl.Promise.transform(Promise.scala:33)
    at scala.concurrent.impl.Promise.transform$(Promise.scala:31)
    at scala.concurrent.impl.Promise$KeptPromise$Successful.transform(Promise.scala:379)
    at scala.concurrent.Future.map(Future.scala:292)
    at scala.concurrent.Future.map$(Future.scala:292)
    at scala.concurrent.impl.Promise$KeptPromise$Successful.map(Promise.scala:379)
    at scala.concurrent.Future$.apply(Future.scala:659)
    at com.databricks.backend.daemon.driver.JupyterKernelListener$BackgroundPollTask.start(JupyterKernelListener.scala:174)
    at com.databricks.backend.daemon.driver.JupyterKernelListener.<init>(JupyterKernelListener.scala:340)
    at com.databricks.backend.daemon.driver.JupyterDriverLocal.$anonfun$startPython$1(JupyterDriverLocal.scala:708)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    at scala.util.Try$.apply(Try.scala:213)
    at com.databricks.backend.daemon.driver.JupyterDriverLocal.com$databricks$backend$daemon$driver$JupyterDriverLocal$$withRetry(JupyterDriverLocal.scala:663)
    at com.databricks.backend.daemon.driver.JupyterDriverLocal.startPython(JupyterDriverLocal.scala:680)
    at com.databricks.backend.daemon.driver.JupyterDriverLocal.<init>(JupyterDriverLocal.scala:403)
    at com.databricks.backend.daemon.driver.PythonDriverWrapper.instantiateDriver(DriverWrapper.scala:781)
    at com.databricks.backend.daemon.driver.DriverWrapper.setupRepl(DriverWrapper.scala:350)
    at com.databricks.backend.daemon.driver.DriverWrapper.run(DriverWrapper.scala:246)
    at java.lang.Thread.run(Thread.java:750)
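
"unable to create new native thread" generally means the driver hit an OS-level thread/process limit or ran out of native (non-heap) memory, rather than JVM heap. One rough way to confirm whether thread counts are climbing between the 5-minute runs is a notebook cell like the following (a sketch only; it assumes a Linux driver node and a Python notebook, and ps/ulimit being available in the driver's shell):

import subprocess

# Threads (lightweight processes) currently alive on the driver node
threads = subprocess.run(["ps", "-eLf"], capture_output=True, text=True)
print("live threads:", len(threads.stdout.splitlines()) - 1)

# Per-user limit on processes/threads; "ulimit" is a shell builtin, hence shell=True
limit = subprocess.run("ulimit -u", shell=True, capture_output=True, text=True)
print("max user processes:", limit.stdout.strip())

If the live-thread count keeps growing across runs and approaches the ulimit, the parallel jobs are leaking threads on the driver rather than exhausting heap.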


2 REPLIES

jose_gonzalez
Moderator

Please check the driver's logs, for example the log4j output and the GC logs.
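
On most runtimes the GC lines already appear in the driver's stdout (cluster page > Driver Logs, alongside the log4j output). If they don't, a hedged sketch of making them visible is to add verbose-GC flags to the driver JVM options in the cluster's Spark config (flags shown here assume the Java 8 based runtimes; adjust for newer JVMs):

spark.driver.extraJavaOptions -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps

Repeated full GCs that reclaim little memory would point at genuine heap pressure; if the heap looks healthy but the driver still dies with "unable to create new native thread", the limit being hit is more likely threads or native memory.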

JKR
New Contributor III

@Jose Gonzalez​ Where can I find the GC logs? And what specifically should I look for in the log4j and GC logs?

I want to understand why my driver is consuming that much RAM. Once the jobs finish executing, it should free that memory and let the other jobs use it.
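
A quick way to spot-check the driver's memory from the notebook itself, before and after each scheduled run, is something like the following (a sketch; psutil ships with most Databricks runtimes, otherwise install it first):

import psutil

# Overall memory on the driver node, as seen by the OS (not just the JVM heap)
mem = psutil.virtual_memory()
print(f"driver memory: {mem.used / 1e9:.1f} GB used of {mem.total / 1e9:.1f} GB ({mem.percent}%)")

If the used figure keeps ratcheting up across runs even when no job is active, something on the driver (notebook state, cached results, leaked threads) is not being released between runs.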
