
Retrieve job ID and run ID from Scala

Sunny
New Contributor III

I need to retrieve the job ID and run ID of the job from a JAR file in Scala.

When I try to compile the code below in IntelliJ, the following error is shown.

import com.databricks.dbutils_v1.DBUtilsHolder.dbutils

object MainSNL {

  @throws(classOf[Exception])
  def main(args: Array[String]): Unit = {
    dbutils.notebook.getContext.tags("jobId").toString()
    dbutils.notebook.getContext.tags("runId").toString()
  }
}

error: Symbol 'type com.databricks.backend.common.rpc.CommandContext' is missing from the classpath.
[ERROR] This symbol is required by 'method com.databricks.dbutils_v1.NotebookUtils.getContext'.
[ERROR] Make sure that type CommandContext is in your classpath and check for conflicting dependencies with `-Ylog-classpath`.
[ERROR] A full rebuild may help if 'NotebookUtils.class' was compiled against an incompatible version of com.databricks.backend.common.rpc.
[ERROR]   dbutils.notebook.getContext.tags("jobId").toString()
[ERROR]   ^
[ERROR] one error found

ACCEPTED SOLUTION

Sunny
New Contributor III

This is resolved by passing {{job_id}} and {{run_id}} as parameters to the JAR.
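For reference, a minimal sketch of what this can look like, assuming the job's JAR task parameters are set to ["{{job_id}}", "{{run_id}}"] in the job configuration so that Databricks substitutes the actual values at run time (the object and variable names here are illustrative, not from the original post):

object MainSNL {

  @throws(classOf[Exception])
  def main(args: Array[String]): Unit = {
    // Databricks replaces {{job_id}} and {{run_id}} before invoking main,
    // so the IDs arrive as ordinary program arguments.
    val jobId = args(0)
    val runId = args(1)
    println(s"jobId=$jobId, runId=$runId")
  }
}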


8 REPLIES

Kaniz
Community Manager

Hi @Sundeep P, we have a similar discussion on this community thread here. Please let us know if it helps.

Sunny
New Contributor III
This is the same code I've used, but I'm receiving a compilation error in IntelliJ.

-werners-
Esteemed Contributor III

Do you use databricks-connect with IntelliJ?

Sunny
New Contributor III

No

Sunny
New Contributor III

Hi @Kaniz Fatma, are there any alternatives to this? Please advise.

ron_defreitas
Contributor

Another method of getting access to this data is through the Spark context, and it does not require the dbutils library.

If you check the Environment tab of the Spark cluster UI, you'll see that Databricks adds a number of properties to the Spark config that can be easily retrieved using the native Spark APIs.

For example, the value of spark.databricks.clusterUsageTags.clusterName might be "job-12345678901-run-987654-default", and you can retrieve it using spark.conf.get(). A sketch of parsing the IDs out of that value follows below.
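A minimal sketch of that approach, assuming the code runs on a Databricks job cluster and the cluster name follows the job-<jobId>-run-<runId>-... pattern from the example above (the regex and object name are assumptions based on that single example value):

import org.apache.spark.sql.SparkSession

object ClusterTagIds {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().getOrCreate()
    // Set by Databricks on job clusters, e.g. "job-12345678901-run-987654-default"
    val clusterName = spark.conf.get("spark.databricks.clusterUsageTags.clusterName")
    val jobRun = "job-(\\d+)-run-(\\d+).*".r
    clusterName match {
      case jobRun(jobId, runId) => println(s"jobId=$jobId, runId=$runId")
      case _                    => println(s"Not a job cluster: $clusterName")
    }
  }
}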


Mohit_m
Valued Contributor II

Maybe it's worth going through the Task Parameter variables section of the doc below:

https://docs.databricks.com/data-engineering/jobs/jobs.html#task-parameter-variables
