Hello,
We are submitting jobs to our Databricks cluster using the /api/2.0/jobs/create API and running a Spark Java application (the JAR submitted with that request). The Java application itself executes as expected; however, the job status in Databricks is shown as Failed at the end.
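For reference, this is roughly how we call the API. The workspace URL, cluster ID, JAR path, and main class below are placeholders, not our real values:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SubmitJob {
    public static void main(String[] args) throws Exception {
        // Placeholders: replace with the real workspace URL, cluster ID,
        // JAR location, and main class. The token comes from the environment.
        String workspaceUrl = "https://<workspace>.cloud.databricks.com";
        String token = System.getenv("DATABRICKS_TOKEN");

        String payload = """
            {
              "name": "spark-java-job",
              "existing_cluster_id": "<cluster-id>",
              "libraries": [{ "jar": "dbfs:/jars/our-app.jar" }],
              "spark_jar_task": { "main_class_name": "com.example.Main" }
            }
            """;

        // POST the job definition to the Jobs 2.0 create endpoint.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(workspaceUrl + "/api/2.0/jobs/create"))
                .header("Authorization", "Bearer " + token)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // on success: {"job_id": ...}
    }
}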
Can you please help us resolve this?
We are getting the following error in the log:
24/01/24 08:36:01 INFO SparkContext: Successfully stopped SparkContext
24/01/24 08:36:01 INFO ProgressReporter$: Removed result fetcher for 645810760151386822_6024284017790236785_job-734546952940362-run-453202441019294-action-3199680969853150
24/01/24 08:36:01 WARN ScalaDriverWrapper: Spark is detected to be down after running a command
24/01/24 08:36:01 WARN ScalaDriverWrapper: Fatal exception (spark down) in ReplId-8f661-76f6f-2cac6
com.databricks.backend.common.rpc.SparkStoppedException: Spark down:
	at com.databricks.backend.daemon.driver.DriverWrapper.executeCommandAndGetError(DriverWrapper.scala:651)
	at com.databricks.backend.daemon.driver.DriverWrapper.executeCommand(DriverWrapper.scala:744)
	at com.databricks.backend.daemon.driver.DriverWrapper.runInnerLoop(DriverWrapper.scala:520)
	at com.databricks.backend.daemon.driver.DriverWrapper.runInner(DriverWrapper.scala:436)
	at com.databricks.backend.daemon.driver.DriverWrapper.run(DriverWrapper.scala:279)
	at java.lang.Thread.run(Thread.java:750)
24/01/24 08:36:03 INFO DrainingState: Started draining: min wait 10000, grace period 5000, max wait 15000.
24/01/24 08:36:05 WARN DriverDaemon: Unexpected exception: java.lang.NullPointerException
java.lang.NullPointerException
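For context, here is a minimal sketch of the shape of our application's entry point (simplified, with placeholder class and app names, not our actual code); the job logic itself completes without errors before the context is stopped:

import org.apache.spark.sql.SparkSession;

public class Main {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("our-app") // placeholder app name
                .getOrCreate();

        // ... job logic runs to completion without errors ...

        // The Spark context is stopped at the end, which matches the
        // "Successfully stopped SparkContext" line in the log above.
        spark.stop();
    }
}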
We are also attaching a snapshot of the error.
Thank you.