I have created a workflow job in Databricks with job parameters.
I want to run the same job with different workloads and data volumes.
So I want the compute cluster to be parametrized as well, so that I can pass the compute requirements (driver size, executor size, and number of worker nodes) dynamically when I run the job.
Is this possible in Databricks?
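
For context, something like the sketch below is what I'm imagining, using the Databricks Python SDK to resize the job cluster before each run. The job ID, cluster key, node types, and parameter names are just placeholders, not my actual setup.

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import jobs, compute

w = WorkspaceClient()

JOB_ID = 123456789  # placeholder job ID

# Overwrite the job's cluster definition with the sizing for this workload.
w.jobs.update(
    job_id=JOB_ID,
    new_settings=jobs.JobSettings(
        job_clusters=[
            jobs.JobCluster(
                job_cluster_key="main_cluster",  # assumed job cluster key
                new_cluster=compute.ClusterSpec(
                    spark_version="15.4.x-scala2.12",
                    node_type_id="i3.2xlarge",         # executor node size
                    driver_node_type_id="i3.4xlarge",  # driver node size
                    num_workers=8,                     # number of worker nodes
                ),
            )
        ]
    ),
)

# Then trigger the run with the usual job parameters.
run = w.jobs.run_now(job_id=JOB_ID, job_parameters={"data_volume": "large"})
```

But this mutates the job definition between runs rather than passing the cluster size as a run-time parameter, which is what I'd prefer. Is there a cleaner, supported way to do this?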