Hi,
We have two Databricks workspaces, prod and dev. On prod, if we create a new all-purpose cluster through the web interface and open the Environment tab in the Spark UI, spark.master is correctly set to the host IP, so the cluster runs in standalone mode.
However, on the very similar dev workspace, if we create a new all-purpose cluster in exactly the same way, spark.master is set to local[*], which means the cluster runs in local mode and does not use executors at all! As far as we can tell, no settings are being overridden or defined differently in the cluster creation process.
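For reference, this is roughly how the runtime value can be confirmed from a notebook attached to each cluster (a minimal sketch using the standard PySpark API; the example output is just what we'd expect, not a guarantee of the exact format):

```python
# `spark` is the SparkSession that a Databricks notebook provides out of the box.
# On the prod cluster this should show a standalone master URL (host IP),
# on the dev cluster it shows local[*].
print(spark.sparkContext.master)
print(spark.conf.get("spark.master"))
```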
Is there a Spark configuration at the workspace or account level that we need to change so that a new all-purpose cluster does not default to local mode?
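In case it helps with diagnosis, this is a rough sketch of how we could dump and diff the specs the two clusters were actually created with (including spark_conf) via the Clusters API; the host, token, cluster IDs, and the get_cluster_spec helper are placeholders of ours, not anything Databricks-specific:

```python
import json
import requests

# Placeholders: fill in the workspace URL, a personal access token, and the two cluster IDs.
HOST = "https://<workspace-host>"
TOKEN = "<personal-access-token>"

def get_cluster_spec(cluster_id: str) -> dict:
    """Fetch the full cluster definition (including spark_conf) from the Clusters API."""
    resp = requests.get(
        f"{HOST}/api/2.0/clusters/get",
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"cluster_id": cluster_id},
    )
    resp.raise_for_status()
    return resp.json()

prod_spec = get_cluster_spec("<prod-cluster-id>")
dev_spec = get_cluster_spec("<dev-cluster-id>")

# Compare the Spark confs the two clusters were created with.
print(json.dumps(prod_spec.get("spark_conf", {}), indent=2, sort_keys=True))
print(json.dumps(dev_spec.get("spark_conf", {}), indent=2, sort_keys=True))
```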
Thanks in advance!