
How to programmatically get the Spark Job ID of a running Spark Task?

FRG96
New Contributor III

In Spark we can get the Spark Application ID inside the Task programmatically using:

SparkEnv.get.blockManager.conf.getAppId

and we can get the Stage ID and Task Attempt ID of the running Task using:

TaskContext.get.stageId
TaskContext.get.taskAttemptId

Is there any way to get the Spark Job ID that is associated with a running Task (preferably using TaskContext or SparkEnv)?

Linked Question on StackOverflow: https://stackoverflow.com/questions/70929032/how-to-programmatically-get-the-spark-job-id-of-a-runni...
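
For reference, here is a minimal sketch (Scala, assuming a running SparkSession named spark) that prints the IDs which are already reachable inside a running task; the job ID is the one piece missing from this list:

import org.apache.spark.{SparkEnv, TaskContext}

spark.sparkContext.parallelize(1 to 4, 2).foreachPartition { _ =>
  // Available inside a task: application ID, stage ID, task attempt ID.
  val appId = SparkEnv.get.blockManager.conf.getAppId
  val stageId = TaskContext.get.stageId()
  val taskAttemptId = TaskContext.get.taskAttemptId()
  println(s"appId=$appId stageId=$stageId taskAttemptId=$taskAttemptId")
  // Neither TaskContext nor SparkEnv exposes the job ID here.
}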

1 ACCEPTED SOLUTION

Dan_Z
Databricks Employee

@Franklin George, honestly, there is no easy way to do this. Your only option is to set up cluster log delivery, which gives you access to the cluster's event log file. This event log is JSON and contains all of the information that the Spark UI uses (and more). It will have the information you are looking for, but it is not trivial to parse manually. I can't think of a better option.

View solution in original post

4 REPLIES

User16763506477
Contributor III

Hi @Franklin George, as also mentioned on StackOverflow, the jobIdToStageIds mapping is stored in the Spark context (DAGScheduler), so I don't think it is possible to get this information at the executor level while a task is running.

May I know what you want to do with the job ID at the task level? What is the use case here?
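
For illustration, here is a rough sketch of how the jobId-to-stageIds mapping can be observed on the driver with a SparkListener (assuming an active SparkSession named spark); this only works driver-side, not inside a task, and the map name here is just for the example:

import scala.collection.concurrent.TrieMap
import org.apache.spark.scheduler.{SparkListener, SparkListenerJobStart}

// Driver-side only: record jobId -> stageIds as each job is submitted.
val jobToStages = TrieMap.empty[Int, Seq[Int]]

spark.sparkContext.addSparkListener(new SparkListener {
  override def onJobStart(jobStart: SparkListenerJobStart): Unit = {
    jobToStages.put(jobStart.jobId, jobStart.stageIds)
  }
})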

FRG96
New Contributor III

Hi @Gaurav Rupnar, I have Spark SQL UDFs (implemented as Scala methods) in which I want to access details of the Spark SQL query that called the UDF, especially a unique query ID, which in Spark SQL corresponds to the Spark job ID. That's why I wanted a way to detect the job ID from the UDF code itself while it executes on the executors as tasks.

Some logic in my UDF requires this unique query ID (job ID) to ensure that the UDF executions are consistent within each Spark SQL query.
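
For context, here is a simplified, hypothetical sketch of the kind of UDF I mean (the names are made up, and it assumes an active SparkSession named spark). Today the query-scoped ID has to be generated on the driver and passed in as a literal, which is exactly what I'd like to avoid by reading the job ID inside the UDF:

import java.util.UUID
import org.apache.spark.sql.functions.{col, lit, udf}

// Hypothetical UDF that needs an identifier unique to the SQL query that invoked it.
val tagWithQuery = udf((value: String, queryId: String) => s"$queryId:$value")

val queryId = UUID.randomUUID().toString  // generated per query on the driver
spark.range(3)
  .select(tagWithQuery(col("id").cast("string"), lit(queryId)).as("tagged"))
  .show(false)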

Dan_Z
Databricks Employee

@Franklin George, honestly, there is no easy way to do this. Your only option is to set up cluster log delivery, which gives you access to the cluster's event log file. This event log is JSON and contains all of the information that the Spark UI uses (and more). It will have the information you are looking for, but it is not trivial to parse manually. I can't think of a better option.
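
To make the parsing part a bit more concrete, here is a rough sketch of scanning a delivered event log (one JSON object per line) for SparkListenerJobStart events, which carry the job ID and its stage IDs. The path is a placeholder, and a real parser should use a proper JSON library rather than substring matching:

import scala.io.Source

// Each line of a Spark event log is a JSON object with an "Event" field;
// SparkListenerJobStart lines include the "Job ID" and "Stage IDs" fields.
val eventLogPath = "/path/to/delivered/eventlog"  // placeholder: wherever log delivery wrote it
val source = Source.fromFile(eventLogPath)
try {
  source.getLines()
    .filter(_.contains("\"Event\":\"SparkListenerJobStart\""))
    .foreach(println)
} finally {
  source.close()
}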

jose_gonzalez
Databricks Employee

Hi @Franklin George,

Just a friendly follow-up. Do you still need help, or did any of the responses provided help you resolve your issue? Please let us know.
