How to programmatically get the Spark Job ID of a running Spark Task?

FRG96
New Contributor III

In Spark we can get the Spark Application ID inside the Task programmatically using:

SparkEnv.get.blockManager.conf.getAppId

and we can get the Stage ID and Task Attempt ID of the running Task using:

TaskContext.get.stageId
TaskContext.get.taskAttemptId

Is there any way to get the Spark Job Id that is associated with a running Task (preferably using TaskContext or SparkEnv)?
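For context, here is a minimal sketch (assuming a SparkSession named spark; the RDD and partition count are just placeholders) of how these calls are typically combined inside a task body:

import org.apache.spark.{SparkEnv, TaskContext}

val rdd = spark.sparkContext.parallelize(1 to 10, 2)
rdd.foreachPartition { _ =>
  // Application ID from the executor-side SparkConf
  val appId = SparkEnv.get.blockManager.conf.getAppId
  // Stage ID and Task Attempt ID of the currently running task
  val ctx = TaskContext.get
  println(s"app=$appId stage=${ctx.stageId} taskAttempt=${ctx.taskAttemptId}")
}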

Linked Question on StackOverflow: https://stackoverflow.com/questions/70929032/how-to-programmatically-get-the-spark-job-id-of-a-runni...

1 ACCEPTED SOLUTION

Dan_Z
Honored Contributor

@Franklin George, honestly, there is no easy way to do this. Your only option is to set up cluster log delivery, which will give you access to the cluster's event log file. This event log is JSON and contains all of the info that the Spark UI uses (and more). It will have the information you are looking for, but it is not trivial to parse manually. I can't think of a better option.
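For illustration, a hypothetical sketch of pulling the Job ID -> Stage IDs mapping out of a delivered event log using Spark itself (the DBFS path below is only a placeholder; the actual location depends on your log delivery configuration):

import spark.implicits._

// Each line of the event log is one JSON event; SparkListenerJobStart events
// carry the "Job ID" and the "Stage IDs" it spawned.
val events = spark.read.json("dbfs:/cluster-logs/<cluster-id>/eventlog/...")
val jobToStages = events
  .filter($"Event" === "SparkListenerJobStart")
  .select($"Job ID", $"Stage IDs")
jobToStages.show(false)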


6 REPLIES

Kaniz
Community Manager

Hi @Franklin George! My name is Kaniz, and I'm the technical moderator here. Great to meet you, and thanks for your question! Let's see if your peers in the community have an answer first; otherwise, I will get back to you soon. Thanks.

Kaniz
Community Manager

Hi @Franklin George​ ,

It depends on which language you are using.

Scala

https://spark.apache.org/docs/1.6.1/api/scala/index.html#org.apache.spark.SparkContext

sc.applicationId

Java

https://spark.apache.org/docs/1.6.2/api/java/org/apache/spark/api/java/JavaSparkContext.html

sparkContext.sc().applicationId();

Python

http://spark.apache.org/docs/1.6.2/api/python/pyspark.html#pyspark.SparkContext

sc.applicationId

It can also depend on the Spark version.
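For completeness, a quick driver-side sketch of the calls above (note this is the application ID, not the Job ID asked about in the question):

val appId = spark.sparkContext.applicationId
println(appId)  // e.g. "app-..." on standalone or "application_..." on YARN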

User16763506477
Contributor III

Hi @Franklin George, as mentioned on StackOverflow as well, the jobIdToStageIds mapping is stored in the Spark context (DAGScheduler), so I don't think it is possible to get this info at the executor level while a task is running.

May I know what you want to do with jobId at the task level? What is the use case here?
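Since that mapping only exists on the driver, one hypothetical workaround (the variable names are made up) is to capture it there with a SparkListener; it still is not visible from inside a running task without extra plumbing:

import scala.collection.concurrent.TrieMap
import org.apache.spark.scheduler.{SparkListener, SparkListenerJobStart}

// Record which job each stage belongs to as jobs start on the driver.
val stageToJob = TrieMap.empty[Int, Int]
spark.sparkContext.addSparkListener(new SparkListener {
  override def onJobStart(jobStart: SparkListenerJobStart): Unit =
    jobStart.stageIds.foreach(stageId => stageToJob(stageId) = jobStart.jobId)
})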

FRG96
New Contributor III

Hi @Gaurav Rupnar, I have Spark SQL UDFs (implemented as Scala methods) in which I want to get details of the Spark SQL query that called the UDF, especially a unique query ID, which in Spark SQL is the Spark Job ID. That's why I wanted a way to detect the Job ID from within the UDF code itself when it is executed on the executors as tasks.

Logic in my UDF requires this unique query ID (Job ID) to ensure that the UDF executions are consistent within each Spark SQL query.
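As an aside, one hypothetical sketch of passing a per-query ID into a UDF without the Job ID: set a local property on the driver thread that runs the query (the property name "myQueryId" is made up), since local properties are propagated to the tasks and readable via TaskContext:

import java.util.UUID
import org.apache.spark.TaskContext
import org.apache.spark.sql.functions.udf

// Driver side: set once per query, in the thread that triggers the action.
spark.sparkContext.setLocalProperty("myQueryId", UUID.randomUUID.toString)

// Executor side: read it back inside the UDF (TaskContext.get is null on the driver).
val tagQuery = udf { (value: String) =>
  val queryId = Option(TaskContext.get).map(_.getLocalProperty("myQueryId")).orNull
  s"$value [$queryId]"
}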


Hi @Franklin George,

Just a friendly follow-up. Do you still need help, or did any of the responses provided help you resolve your issue? Please let us know.
