Hi — good question. The cleanest way to do this is with task values, no REST API needed.
Approach: Task Values (Recommended)
In Child 1's notebook, capture its own run_id and set it as a task value:
```python
import json

# Grab this notebook's execution context to find its own task run_id
ctx = json.loads(
    dbutils.notebook.entry_point.getDbutils().notebook().getContext().toJson()
)
child1_run_id = ctx["currentRunId"]["id"]

# Publish the run_id as a task value so it can be referenced downstream
dbutils.jobs.taskValues.set(key="child1_run_id", value=str(child1_run_id))
```
Then in your orchestrator job, when configuring Parent 2's job parameters, reference it with:
{{tasks.Parent1.values.child1_run_id}}
Task values set inside a child job are propagated back through the run_job task, so the orchestrator can access them via {{tasks.<run_job_task_name>.values.<key>}}.
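To make that concrete, here is a minimal, hypothetical job-spec fragment for the orchestrator. The task keys, job IDs, and parameter name are illustrative assumptions, not your actual configuration:

```json
{
  "tasks": [
    {
      "task_key": "Parent1",
      "run_job_task": { "job_id": 111 }
    },
    {
      "task_key": "Parent2",
      "depends_on": [ { "task_key": "Parent1" } ],
      "run_job_task": {
        "job_id": 222,
        "job_parameters": {
          "child1_run_id": "{{tasks.Parent1.values.child1_run_id}}"
        }
      }
    }
  ]
}
```

The template is resolved at run time, after Parent1 completes, so Parent2 receives the already-materialized run_id as an ordinary job parameter.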
Why not {{tasks.Parent1.run_id}}?
As you noticed, {{tasks.Parent1.run_id}} gives you the orchestrator's task run_id for the run_job task itself — not the child job's internal task run_id. That's why task values are the right tool here: they let the child task explicitly publish its own metadata for upstream consumption.
REST API Fallback
If you can't modify Child 1's notebook, then yes, the REST API approach works:
- Pass {{tasks.Parent1.run_id}} into an intermediate notebook task
- Use the Runs Get API (GET /api/2.1/jobs/runs/get) to fetch the triggered child job's run details and extract Child 1's task run_id from the tasks array
But if you can add a couple of lines to Child 1, the task values approach is simpler and avoids API calls entirely.
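As a rough sketch of the fallback, the snippet below calls the Runs Get API and pulls a named task's run_id out of the response. The host/token handling and the exact hops from the run_job task's run_id to the child run are assumptions you'll need to adapt; the extraction helper is exercised here on an illustrative payload shaped like a Runs Get response:

```python
import json
import urllib.request

def extract_task_run_id(run: dict, task_key: str):
    """Pull a specific task's run_id out of a Runs Get response payload."""
    for task in run.get("tasks", []):
        if task.get("task_key") == task_key:
            return task.get("run_id")
    return None

def get_run(host: str, token: str, run_id: int) -> dict:
    # Runs Get API: GET /api/2.1/jobs/runs/get?run_id=<id>
    req = urllib.request.Request(
        f"{host}/api/2.1/jobs/runs/get?run_id={run_id}",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Illustrative payload shaped like a Runs Get response (not real data)
sample = {
    "run_id": 900,
    "tasks": [
        {"task_key": "Child1", "run_id": 901},
        {"task_key": "Child2", "run_id": 902},
    ],
}
print(extract_task_run_id(sample, "Child1"))  # → 901
```

In practice you would call `get_run(...)` with the run_id passed in from {{tasks.Parent1.run_id}} and then apply the extraction helper to the result.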
Hope that helps!
Anuj Lathi
Solutions Engineer @ Databricks