When I run notebooks from within a notebook using `dbutils.notebook.run`, I get a nice progress table that updates automatically, showing the execution time, the status, and a link to each notebook run; it is seamless.
My goal now is to execute many notebooks or Python scripts in parallel and have Spark figure out when to run what and which resources to use. Essentially, I want to create a job with a dynamic number of parallel tasks and submit it. I can do that using the `submit` method of the `WorkspaceClient` jobs API (sketch below). That works exactly as intended; however, the results are just a linear stream of prints to standard output. I want a nice progress table like the one I get from `dbutils.notebook.run`, because it is very convenient for navigating to failed notebooks/scripts and getting a clear overview of all the task results in that job run.
Do any of you know how this progress table is generated, and whether I can reproduce it in my own processes and loops?
This is the table I want to see:
[screenshot: the auto-updating progress table from `dbutils.notebook.run`, with per-run status, duration, and notebook links]