I executed a spark-submit job through the Databricks CLI with the following job configuration:
{
"job_id": 123,
"creator_user_name": "******",
"run_as_user_name": "******",
"run_as_owner": true,
"settings": {
"name": "44aa-8447-c123aad310",
...
Task to achieve: we have a cluster ID and want to fetch all runs that executed on it. Currently I have to list all the runs, iterate through them, and filter out the ones with the required cluster ID. Similarly, how do I fetch all the runs that are currently active? What I do today is sketched below.
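As far as I can tell, the runs list endpoint has no cluster-ID filter, so the client-side filtering has to stay; it does, however, take an active_only flag for the second part. Below is a minimal sketch of what I do today against /api/2.1/jobs/runs/list (pagination fields may differ slightly by API version). The DATABRICKS_HOST and DATABRICKS_TOKEN environment variable names and the helper functions are just illustrative choices of mine.

# Sketch: list job runs and filter by cluster_id on the client side,
# since runs/list itself does not filter by cluster.
import os
import requests

HOST = os.environ["DATABRICKS_HOST"]    # workspace URL, e.g. https://adb-xxxx.azuredatabricks.net
TOKEN = os.environ["DATABRICKS_TOKEN"]  # personal access token
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def list_runs(active_only=False):
    # Page through /api/2.1/jobs/runs/list and yield every run.
    params = {"active_only": str(active_only).lower(), "limit": 25}
    while True:
        resp = requests.get(f"{HOST}/api/2.1/jobs/runs/list",
                            headers=HEADERS, params=params)
        resp.raise_for_status()
        body = resp.json()
        yield from body.get("runs", [])
        if not body.get("has_more"):
            break
        params["page_token"] = body["next_page_token"]

def runs_for_cluster(cluster_id):
    # Keep only runs whose cluster_instance points at the given cluster.
    return [r for r in list_runs()
            if r.get("cluster_instance", {}).get("cluster_id") == cluster_id]

# All currently pending/running runs:
active_runs = list(list_runs(active_only=True))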
We are moving from AWS EMR to Azure Databricks. On EMR we used to change the executor memory according to each job's requirements. Won't we need to do the same on Databricks?
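From what I have read, Databricks derives the default executor memory from the worker node type, so much of that tuning becomes choosing an appropriate node type and worker count for the job cluster. If we still need an explicit override, Spark properties can go in the cluster's spark_conf. A rough sketch of a new_cluster spec follows; the runtime version, node type, worker count, and the 8g value are placeholders, not recommendations.

# Hypothetical job-cluster spec; all values below are placeholders for illustration.
new_cluster = {
    "spark_version": "13.3.x-scala2.12",
    "node_type_id": "Standard_DS3_v2",
    "num_workers": 2,
    "spark_conf": {
        "spark.executor.memory": "8g",  # explicit override, only if the node-type default is not enough
    },
}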