Hi @hgintexas,
You're right, the system job timeline tables and the Runs API don't currently surface the resolved per-iteration inputs for a For-each task when those inputs are sourced via task values set in another notebook with dbutils.jobs.taskValues.set().
The only place Databricks explicitly documents showing task values for a For-each run is the "Output" panel in the task run details UI, which isn't exposed by the Jobs Get Run Output endpoint. There is no separate API that exposes the same rendered output for aggregation across runs.
A potential workaround: in the upstream task (the one calling dbutils.jobs.taskValues.set()), also write each key/value you set to a small Delta table you control, along with identifiers you can later join on (job_id, job_run_id, task_key, and any iteration ID or logical key you use) against system.lakeflow.job_task_run_timeline or system.lakeflow.job_run_timeline. It isn't the simplest solution, but I think it would achieve what you're looking for; see the sketch below.
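Here's a minimal sketch of what I mean. It assumes you pass {{job.id}} and {{job.run_id}} to the upstream task as parameters named "job_id" and "job_run_id" (via dynamic value references), and that you own an audit table; the table name main.ops.task_value_audit, the task key, and the column names are all illustrative placeholders, not anything Databricks provides:

```python
from pyspark.sql import functions as F

# The per-iteration inputs you already hand to the For-each task
iteration_inputs = ["input_a", "input_b", "input_c"]

# Set the task value exactly as you do today
dbutils.jobs.taskValues.set(key="iteration_inputs", value=iteration_inputs)

# Run identifiers, passed in as task parameters via dynamic value references
job_id = dbutils.widgets.get("job_id")          # from {{job.id}}
job_run_id = dbutils.widgets.get("job_run_id")  # from {{job.run_id}}

# Mirror each value into the audit table, one row per iteration input
audit_df = (
    spark.createDataFrame(
        [(job_id, job_run_id, "set_inputs_task", v) for v in iteration_inputs],
        schema="job_id string, job_run_id string, task_key string, input_value string",
    )
    .withColumn("logged_at", F.current_timestamp())
)
audit_df.write.mode("append").saveAsTable("main.ops.task_value_audit")

# Later, join the audit rows to the system timeline to line each input up
# with its task runs (adjust casts if the ID types differ in your workspace):
spark.sql("""
    SELECT t.job_id, t.job_run_id, t.task_key, a.input_value, t.period_start_time
    FROM system.lakeflow.job_task_run_timeline AS t
    JOIN main.ops.task_value_audit AS a
      ON t.job_id = a.job_id
     AND t.job_run_id = a.job_run_id
""").display()
```

Since the audit write happens in the same task that sets the value, the Delta table stays in lockstep with what the For-each task actually receives, and you can aggregate across runs with plain SQL instead of scraping the UI.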