Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Is it possible to get the Job Run ID of a notebook run by dbutils.notebook.run?

hanspetter
New Contributor III

When running a notebook using dbutils.notebook.run from a master notebook, a URL to that running notebook is printed, e.g.:

Notebook job #223150

Notebook job #223151

Is there any way to capture that Job Run ID (#223150 or #223151)? We have 50 or so notebooks that run in parallel, and if one of them fails it would be nice to see the actual run of that notebook without clicking every URL to find the correct one.
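For context, the fan-out looks roughly like this (a minimal sketch; the notebook paths, worker count, and timeout are illustrative, not our actual setup):

from concurrent.futures import ThreadPoolExecutor

# Illustrative sketch of the parallel fan-out described above.
# dbutils.notebook.run blocks until the child finishes and returns its exit value,
# but the "Notebook job #..." link is only printed, not returned.
notebook_paths = [f"/Shared/etl/child_notebook_{i}" for i in range(50)]

def run_child(path):
    return dbutils.notebook.run(path, 3600)

with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(run_child, notebook_paths))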

Thanks 🙂

19 REPLIES

Thank you very much for sharing!

dbutils.notebook.entry_point isn't even mentioned in the official docs:

https://docs.databricks.com/dev-tools/databricks-utils.html#notebook-utility-dbutilsnotebook
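Since it is undocumented, one way to see what the notebook context actually exposes is to dump it as JSON (a minimal sketch, assuming toJson() is available on your runtime; field names can vary between runtime versions):

import json

# Inspect the undocumented notebook context to see which tags (jobId, runId, ...) exist.
ctx_json = dbutils.notebook.entry_point.getDbutils().notebook().getContext().toJson()
print(json.dumps(json.loads(ctx_json), indent=2))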

415963
New Contributor II

Is there also a way to achieve this when the notebook does not complete successfully, e.g. when a structured streaming query fails?

wahanand
New Contributor II

The Spark config value "spark.databricks.clusterUsageTags.clusterAllTags" contains JSON with jobId, runId, clusterId, etc.

The article below will help you:

https://community.databricks.com/s/feed/0D53f00001HKHjxCAH
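For reference, a minimal sketch of reading and parsing that config from a notebook. The structure (a JSON array of {"key": ..., "value": ...} pairs) and the tag names ("JobId", "RunName") are assumptions; check the raw string on your own cluster:

import json

# Parse the cluster tags into a plain dict; verify the JSON shape on your cluster.
raw = spark.conf.get("spark.databricks.clusterUsageTags.clusterAllTags")
tags = {t["key"]: t["value"] for t in json.loads(raw)}

print(tags.get("JobId"), tags.get("RunName"))  # tag key names are assumptions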

DungTran
New Contributor II

I would like to add that this works too:

dbutils.notebook.entry_point.getDbutils().notebook().getContext().jobId().get()
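One caveat: jobId() returns a Scala Option, so .get() only works when the notebook actually runs as a job. A guarded variant (assuming isDefined() is reachable through the py4j bridge) might look like:

ctx = dbutils.notebook.entry_point.getDbutils().notebook().getContext()
# jobId() is empty when the notebook runs interactively rather than as a job.
job_id = ctx.jobId().get() if ctx.jobId().isDefined() else None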

Rodrigo_Mohr
New Contributor II

I know this is an old thread, but here is what is working well for me in Python now for retrieving the run_id as well and building the full link to that job run:

job_id = dbutils.notebook.entry_point.getDbutils().notebook().getContext().jobId().get()
run_id = dbutils.notebook.entry_point.getDbutils().notebook().getContext().currentRunId().get().id()
workspace_url = spark.conf.get("spark.databricks.workspaceUrl")
full_run_url = f"https://{workspace_url}/jobs/{job_id}/runs/{run_id}"
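As a usage note, the link can then be surfaced directly in the notebook output, for example with displayHTML (the link text is just illustrative):

# Render the run link as clickable output in the notebook cell.
displayHTML(f'<a href="{full_run_url}">Open this job run</a>')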
