
Databricks spark_jar_task failed when submitted via API

Nisha2
New Contributor II

Hello,
We are submitting jobs to the Databricks cluster via the /api/2.0/jobs/create API and running a Spark Java application (the jar submitted with that request). The Java application executes as expected; however, the status of the job in Databricks is shown as Failed at the end.

Can you please help us resolve this?
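For reference, here is a minimal sketch of how we call the API; the workspace URL, cluster ID, jar path, and main class below are placeholders, not our real values:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SubmitJarJob {
    public static void main(String[] args) throws Exception {
        String host = "https://<workspace>.azuredatabricks.net"; // placeholder
        String token = System.getenv("DATABRICKS_TOKEN");

        // Minimal spark_jar_task payload for /api/2.0/jobs/create.
        String payload = """
            {
              "name": "example-jar-job",
              "existing_cluster_id": "<cluster-id>",
              "libraries": [{ "jar": "dbfs:/FileStore/jars/app.jar" }],
              "spark_jar_task": {
                "main_class_name": "com.example.Main",
                "parameters": []
              }
            }
            """;

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(host + "/api/2.0/jobs/create"))
                .header("Authorization", "Bearer " + token)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // {"job_id": ...} on success
    }
}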

We are getting the following error in the log:

24/01/24 08:36:01 INFO SparkContext: Successfully stopped SparkContext
24/01/24 08:36:01 INFO ProgressReporter$: Removed result fetcher for 645810760151386822_6024284017790236785_job-734546952940362-run-453202441019294-action-3199680969853150
24/01/24 08:36:01 WARN ScalaDriverWrapper: Spark is detected to be down after running a command
24/01/24 08:36:01 WARN ScalaDriverWrapper: Fatal exception (spark down) in ReplId-8f661-76f6f-2cac6
com.databricks.backend.common.rpc.SparkStoppedException: Spark down: 
  at com.databricks.backend.daemon.driver.DriverWrapper.executeCommandAndGetError(DriverWrapper.scala:651)
  at com.databricks.backend.daemon.driver.DriverWrapper.executeCommand(DriverWrapper.scala:744)
  at com.databricks.backend.daemon.driver.DriverWrapper.runInnerLoop(DriverWrapper.scala:520)
  at com.databricks.backend.daemon.driver.DriverWrapper.runInner(DriverWrapper.scala:436)
  at com.databricks.backend.daemon.driver.DriverWrapper.run(DriverWrapper.scala:279)
  at java.lang.Thread.run(Thread.java:750)
24/01/24 08:36:03 INFO DrainingState: Started draining: min wait 10000, grace period 5000, max wait 15000.
24/01/24 08:36:05 WARN DriverDaemon: Unexpected exception: java.lang.NullPointerException
java.lang.NullPointerException
 


Also attaching the snapshot.

Thank you.

1 REPLY

Nisha2
New Contributor II

Hello @Retired_mod,
Thank you for the reply. After analyzing my code, I found that I was creating the SparkSession inside a try-with-resources block, which automatically closes the session (stopping Spark) when the block exits, so the job was marked as failed even though the code ran. For now I have removed the try block, roughly as sketched below. But now I'm facing another issue; the error log follows the sketch:
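Roughly, the change looked like this (class name and query simplified for illustration):

import org.apache.spark.sql.SparkSession;

public class Main {
    public static void main(String[] args) {
        // Before: try-with-resources called close()/stop() on the session when
        // the block exited, which Databricks detects as "Spark down" and
        // reports the run as failed even though the code completed:
        //
        //   try (SparkSession spark = SparkSession.builder().getOrCreate()) { ... }
        //
        // After: reuse the cluster's existing session and let the Databricks
        // driver manage its lifecycle; no spark.stop() or spark.close().
        SparkSession spark = SparkSession.builder().getOrCreate();
        spark.sql("SELECT 1").show();
    }
}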

24/03/18 09:41:55 ERROR MicroBatchExecution: Query [id = 88d62783-1d5d-4a7e-b75c-875fa5d42892, runId = 24ec7106-189a-49f8-88ec-b4cb213d1496] terminated with error
com.databricks.sql.transaction.tahoe.DeltaAnalysisException: Incompatible format detected.

You are trying to write to `abfss://pdpdeltalake@stpdpdeltalakepoc.dfs.core.windows.net/test/di/sap-cic-plant-plant/` using Delta, but there is no
transaction log present. Check the upstream job to make sure that it is writing
using format("delta") and that you are trying to write to the table base path.
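This message typically means the target path already contains files but no Delta transaction log (_delta_log). Per its guidance, the writer has to use format("delta") and target the table base path, which must be empty or already a Delta table. A rough sketch of such a streaming write, with placeholder paths and a built-in test source standing in for the real upstream:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;

public class DeltaStreamWrite {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder().getOrCreate();

        // Placeholder source; the real job reads from its own upstream.
        Dataset<Row> source = spark.readStream()
                .format("rate")
                .load();

        // Write with format("delta") to the table *base* path, not a subdirectory.
        StreamingQuery query = source.writeStream()
                .format("delta")
                .option("checkpointLocation", "abfss://<container>@<account>.dfs.core.windows.net/<checkpoint-path>")
                .start("abfss://<container>@<account>.dfs.core.windows.net/<table-base-path>");

        query.awaitTermination();
    }
}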
