This is the code I am currently using. It takes the list of all the job runs and then filters that list to get the run ID of the run whose name matches. I want to know if there is a better way to optimise this.
I am using the legacy Databricks CLI, version 0.17.8.
cmd = ["databricks", "runs", "list", "--output", "json"]
output = subprocess.run(cmd, capture_output=True) # noqa: S607,S603
stdout = output.stdout.decode("utf-8")
runs = json.loads(stdout)
run_name = submit_body["run_name"]
spark_python_task = submit_body["spark_python_task"]
matching_run = None
for _run in runs["runs"]:
if _run["run_name"] == run_name and _run["task"]["spark_python_task"] == spark_python_task:
matching_run = _run
break