Hey @ayushmangal72 , try using the Jobs API's list-runs endpoint (/api/2.2/jobs/runs/list) to fetch the run IDs of older runs of the job.
Once you have a run_id, make a request to /api/2.2/jobs/runs/get. The DBR version appears in the `spark_version` field of the API response (I've included a snippet of a sample response below showing where it can be found).
```json
"job_clusters": [
    {
        "job_cluster_key": "auto_scaling_cluster",
        "new_cluster": {
            "autoscale": {
                "max_workers": 16,
                "min_workers": 2
            },
            "node_type_id": null,
            "spark_conf": {
                "spark.speculation": true
            },
            "spark_version": "7.3.x-scala2.12"
        }
    }
],
```
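Here's a rough sketch of how you could script this. The HTTP helper and the `HOST`/`TOKEN` placeholders are assumptions you'd fill in for your workspace; the extraction function just walks the `job_clusters` shape shown in the sample above (for single-cluster or task-level runs the version may instead sit under each task's `cluster_spec`, so adjust accordingly).

```python
import json
import urllib.request

HOST = "https://<your-workspace>.cloud.databricks.com"  # placeholder
TOKEN = "<personal-access-token>"                       # placeholder


def api_get(path: str, query: str = "") -> dict:
    """GET a Jobs API endpoint and return the parsed JSON body."""
    req = urllib.request.Request(
        f"{HOST}{path}{query}",
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def extract_spark_versions(run: dict) -> dict:
    """Map each job_cluster_key to its spark_version in a runs/get response."""
    return {
        jc["job_cluster_key"]: jc["new_cluster"]["spark_version"]
        for jc in run.get("job_clusters", [])
    }


# Intended flow (requires a live workspace, so not executed here):
#   runs = api_get("/api/2.2/jobs/runs/list", "?job_id=<job_id>")
#   run = api_get("/api/2.2/jobs/runs/get", f"?run_id={runs['runs'][0]['run_id']}")
#   print(extract_spark_versions(run))

# Demonstration against the sample response shape from above:
sample_run = {
    "job_clusters": [
        {
            "job_cluster_key": "auto_scaling_cluster",
            "new_cluster": {"spark_version": "7.3.x-scala2.12"},
        }
    ]
}
print(extract_spark_versions(sample_run))
# {'auto_scaling_cluster': '7.3.x-scala2.12'}
```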