Direct UI Replication Is Not Supported Natively:
Databricks does not currently publish a public API or widget for embedding the same progress table in notebooks for arbitrary parallel tasks, scripts, or jobs launched via the WorkspaceClient Jobs API.
Workarounds and Alternatives
1. Use Job Task API + Output Tracking
- You can collect status, results, and links via the Jobs API: record each task's run ID, status, and notebook/script path, then poll for status updates with Python.
- Display this as a custom Markdown or HTML table in your notebook, but the links won't be "magic": they require manual formatting.
- Example skeleton:

      # For illustration: job_tasks is a list of dicts you assembled from run info,
      # and get_databricks_url maps a run ID to its run page URL.
      for task in job_tasks:
          print(f"| {task['name']} | {task['status']} | {task['runtime']} | [Link]({get_databricks_url(task['run_id'])}) |")

This is manual: you need to loop through the run info, fetch statuses via the API, and build the table yourself; one way to assemble `job_tasks` is sketched below.
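A minimal sketch of how `job_tasks` and `get_databricks_url` from the skeleton above could be built with the Databricks SDK for Python. The run ID and the dictionary keys are illustrative assumptions, not a fixed schema:

```python
# Minimal sketch, assuming databricks-sdk is installed and the notebook
# can authenticate (it does so automatically inside Databricks).
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()
parent_run_id = 123456  # hypothetical run ID of the multi-task job run

run = w.jobs.get_run(parent_run_id)
job_tasks = []
for t in run.tasks or []:
    # result_state is only set once the task finishes; fall back to the
    # life-cycle state (PENDING/RUNNING/...) while it is still in flight.
    status = (t.state.result_state or t.state.life_cycle_state).value
    job_tasks.append({
        "name": t.task_key,
        "status": status,
        "runtime": round(max(0, (t.end_time or 0) - (t.start_time or 0)) / 1000, 1),
        "run_id": t.run_id,
    })

def get_databricks_url(run_id: int) -> str:
    # Each task run has its own run page; the Jobs API returns its URL.
    return w.jobs.get_run(run_id).run_page_url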
2. Databricks Widgets with Polling
- Use notebook widgets (dbutils.widgets) inside a master tracking notebook: have each child task update its status, which the master notebook then polls and formats for display (a minimal sketch follows this list).
- You must program the status updates and presentation logic manually.
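One possible shape for the master-notebook side of this pattern, using a widget only as the input parameter (the run ID to track) and the Jobs API for the actual statuses; the widget name, poll interval, and iteration cap are arbitrary choices for illustration:

```python
# Minimal sketch of a master tracking notebook. Assumes it runs inside a
# Databricks notebook (dbutils is available) with databricks-sdk installed.
import time
from databricks.sdk import WorkspaceClient

dbutils.widgets.text("tracked_run_id", "", "Run ID to track")  # input widget
tracked_run_id = int(dbutils.widgets.get("tracked_run_id"))

w = WorkspaceClient()
TERMINAL = {"TERMINATED", "SKIPPED", "INTERNAL_ERROR"}

for _ in range(60):  # poll for up to ~10 minutes
    run = w.jobs.get_run(tracked_run_id)
    for t in run.tasks or []:
        status = (t.state.result_state or t.state.life_cycle_state).value
        print(f"{t.task_key}: {status}")
    if run.state.life_cycle_state.value in TERMINAL:
        break
    time.sleep(10)
```

The presentation step (turning these statuses into a table with links) is the same as in the other workarounds.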
3. Job Results Table Notebook
- After job completion, create a summary notebook that calls the Jobs API, collects statuses, durations, and notebook/script links, and formats them as a Markdown/HTML table (a sketch follows this list).
- Provide clickable links to failed and completed runs using their run IDs.
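A possible shape for such a summary notebook, again using the Databricks SDK; `job_id`, the 25-run limit, and the column layout are assumptions for illustration:

```python
# Minimal sketch: after the job finishes, summarize its recent runs as a
# Markdown table with one row per task and a clickable run-page link.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()
job_id = 123  # hypothetical job ID

lines = ["| Task Name | Status | Time (s) | Link |", "|---|---|---|---|"]
for run in w.jobs.list_runs(job_id=job_id, expand_tasks=True, limit=25):
    for t in run.tasks or []:
        status = (t.state.result_state or t.state.life_cycle_state).value
        elapsed_s = round(max(0, (t.end_time or 0) - (t.start_time or 0)) / 1000, 1)
        link = w.jobs.get_run(t.run_id).run_page_url  # per-task run page
        lines.append(f"| {t.task_key} | {status} | {elapsed_s} | [Open]({link}) |")

markdown_table = "\n".join(lines)
print(markdown_table)  # paste into a %md cell, or convert to HTML and render it
```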
Example: Pseudo-Progress Table
| Task Name | Status | Time (s) | Link |
|---|---|---|---|
| Notebook A | Success | 23 | Open |
| Script B | Failed | 58 | Open |
You build this by querying job run info via the Jobs API, then rendering it as Markdown or HTML so it displays in the notebook. For dynamic updates, you would need refreshable widgets or periodic polling; a rendering sketch with displayHTML follows.
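For example, the rows can be rendered as a clickable HTML table with displayHTML; re-running the cell (or wrapping it in a polling loop) provides the "refresh", since each displayHTML call simply appends a new output. The `job_tasks` list and `get_databricks_url` helper below are the hypothetical ones from the earlier sketch:

```python
# Minimal sketch: render the collected task info as an HTML table in the notebook.
rows = "".join(
    f"<tr><td>{t['name']}</td><td>{t['status']}</td><td>{t['runtime']}</td>"
    f"<td><a href='{get_databricks_url(t['run_id'])}' target='_blank'>Open</a></td></tr>"
    for t in job_tasks
)
displayHTML(
    "<table>"
    "<tr><th>Task Name</th><th>Status</th><th>Time (s)</th><th>Link</th></tr>"
    f"{rows}</table>"
)
```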
Summary
- The progress table feature in Databricks notebooks is not publicly available as an API or widget for custom jobs or scripts.
- You can build similar tables manually by polling job/task status from the Jobs API and formatting the results as Markdown/HTML within your notebook.
- Direct live updates and interactive linking work cleanly only with dbutils.notebook.run inside the Databricks UI; custom approaches require more engineering effort and have some limitations.

If your main goal is navigation and monitoring convenience, focus on collecting run IDs and statuses, polling the API, and formatting a summary table in your master notebook with links to the relevant outputs.