Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Retrieve job ID and run ID from Scala

Sunny
New Contributor III

I need to retrieve the job ID and run ID of a job from a JAR file in Scala.

When I try to compile the code below in IntelliJ, the following error is shown.

import com.databricks.dbutils_v1.DBUtilsHolder.dbutils

object MainSNL {

  @throws(classOf[Exception])
  def main(args: Array[String]): Unit = {
    // Read the job and run IDs from the notebook context tags.
    val jobId = dbutils.notebook.getContext.tags("jobId")
    val runId = dbutils.notebook.getContext.tags("runId")
  }
}

error: Symbol 'type com.databricks.backend.common.rpc.CommandContext' is missing from the classpath.

[ERROR] This symbol is required by 'method com.databricks.dbutils_v1.NotebookUtils.getContext'.

[ERROR] Make sure that type CommandContext is in your classpath and check for conflicting dependencies with `-Ylog-classpath`.

[ERROR] A full rebuild may help if 'NotebookUtils.class' was compiled against an incompatible version of com.databricks.backend.common.rpc.

[ERROR]   dbutils.notebook.getContext.tags("jobId").toString()

[ERROR]   ^

[ERROR] one error found

1 ACCEPTED SOLUTION

Sunny
New Contributor III

This is resolved by passing {{job_id}} and {{run_id}} as parameters to the JAR.
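
For reference, here is a minimal sketch of how the JAR's main method might read those values, assuming the job's JAR task is configured with the parameters ["{{job_id}}", "{{run_id}}"] (the object name, argument order, and printing are illustrative):

object MainSNL {

  @throws(classOf[Exception])
  def main(args: Array[String]): Unit = {
    // Databricks substitutes {{job_id}} and {{run_id}} at launch time,
    // so args arrives as e.g. Array("123456789", "987654").
    require(args.length >= 2, "expected job_id and run_id as arguments")
    val jobId = args(0)
    val runId = args(1)
    println(s"jobId=$jobId, runId=$runId")
  }
}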

View solution in original post

7 REPLIES

Sunny
New Contributor III
This is the same code I've used, but I'm receiving a compilation error in IntelliJ.

-werners-
Esteemed Contributor III

Do you use databricks-connect with IntelliJ?

Sunny
New Contributor III

No

Sunny
New Contributor III

Hi @Kaniz Fatma, are there any alternatives to this? Please advise.

ron_defreitas
Contributor

Another method of getting access to this data is through the Spark context, and it does not require the dbutils library.

If you check the Environment tab of the Spark cluster UI, you'll see that Databricks adds a number of properties to the Spark config that can be easily retrieved using the native Spark APIs.

An example value of spark.databricks.clusterUsageTags.clusterName is "job-12345678901-run-987654-default", which you can retrieve using spark.conf.get().
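
A minimal sketch of that approach, assuming the cluster name follows the "job-<jobId>-run-<runId>-..." pattern shown above (the naming scheme is observed behavior rather than a documented contract, so validate before relying on it):

import org.apache.spark.sql.SparkSession

object JobInfoFromConf {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().getOrCreate()

    // On a job cluster this looks like "job-12345678901-run-987654-default".
    val clusterName = spark.conf.get("spark.databricks.clusterUsageTags.clusterName")

    // Parse the job and run IDs out of the cluster name.
    val pattern = """job-(\d+)-run-(\d+).*""".r
    clusterName match {
      case pattern(jobId, runId) => println(s"jobId=$jobId, runId=$runId")
      case other                 => println(s"Unexpected cluster name format: $other")
    }
  }
}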

Sunny
New Contributor III

This is resolved by passing {{job_id}} and {{run_id}} as parameters to the JAR.

Mohit_m
Valued Contributor II

It may be worth going through the Task parameter variables section of the doc below:

https://docs.databricks.com/data-engineering/jobs/jobs.html#task-parameter-variables
