I know that Databricks runs one executor per worker node.
Can I change the number of executors by setting a parameter (spark.executor.instances) in the cluster's Advanced Options (Spark config)? And can I also pass this parameter when I schedule a task, so that that particular task runs with that configuration? A sketch of what I have in mind is below.
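For context, here is roughly what I'm planning to try. In the cluster UI I would add lines like `spark.executor.instances 4` to the Advanced Options > Spark config box, and for a scheduled task I would set the same keys in the job cluster's spark_conf. This is a minimal Python sketch using the Databricks Jobs API 2.1; the workspace URL, token, notebook path, runtime version, node type, and the config values themselves are all placeholders, not recommendations:

```python
import requests

# Placeholders -- substitute a real workspace URL, PAT, and notebook path.
HOST = "https://<your-workspace>.cloud.databricks.com"
TOKEN = "<personal-access-token>"

job_spec = {
    "name": "task-with-executor-overrides",
    "tasks": [
        {
            "task_key": "main",
            "notebook_task": {"notebook_path": "/Users/me/my_notebook"},
            "new_cluster": {
                "spark_version": "13.3.x-scala2.12",  # example runtime
                "node_type_id": "i3.xlarge",          # example node type
                "num_workers": 4,
                # Executor settings I'd like this task's cluster to use;
                # the values are purely illustrative.
                "spark_conf": {
                    "spark.executor.instances": "4",
                    "spark.executor.memory": "8g",
                },
            },
        }
    ],
}

# Create the job; the response should contain the new job_id on success.
resp = requests.post(
    f"{HOST}/api/2.1/jobs/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=job_spec,
)
resp.raise_for_status()
print(resp.json())
```

Is this the right way to scope executor settings to a single scheduled task, or does Databricks ignore spark.executor.instances set this way?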
Is it advisable to modify executor parameters like the number of instances and executor memory, or should we stick with the defaults that Databricks manages internally?