Hi everyone, I'm trying to find out whether Databricks supports clusters that can scale out with additional drivers so that new jobs run in parallel. If not, is there a workaround? As far as I can tell, both all-purpose and job compute clusters have only a single driver.
I'm running my Spark application from a JAR file, passing different arguments on every run. The runs need to execute truly in parallel (not sequentially, and not just interleaved on shared compute), because I have a fairly strict time constraint. I also need autoscaling support for the same reason.
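For context, this is roughly the approach I've been considering: trigger one job run per argument set through the Jobs API and let each run spin up its own job cluster, and therefore its own driver. The job ID and argument values below are placeholders, and the job itself is assumed to be a `spark_jar_task` configured with `max_concurrent_runs` greater than 1; I'm not sure this is the right pattern, which is partly why I'm asking:

```python
import os
import requests

# Placeholder values; adjust for your workspace and job.
HOST = os.environ["DATABRICKS_HOST"]    # e.g. https://<workspace>.cloud.databricks.com
TOKEN = os.environ["DATABRICKS_TOKEN"]
JOB_ID = 123                            # hypothetical job ID for the JAR job

# One argument set per desired parallel run of the JAR's main class.
argument_sets = [
    ["--date", "2024-01-01"],
    ["--date", "2024-01-02"],
    ["--date", "2024-01-03"],
]

# Trigger one run per argument set via the Jobs API 2.1 run-now endpoint.
# With a job cluster, each run should get its own cluster (and driver),
# so the runs can execute in parallel rather than queueing on one cluster.
for args in argument_sets:
    resp = requests.post(
        f"{HOST}/api/2.1/jobs/run-now",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"job_id": JOB_ID, "jar_params": args},
    )
    resp.raise_for_status()
    print("started run:", resp.json()["run_id"])
```

What I can't tell is whether runs launched this way really each get their own driver, or whether they end up queued behind one another.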
I'm quite new to Databricks and Spark as well, so I'd greatly appreciate anyone's input.