Hi team,
It seems that in Databricks, unlike running Spark jobs on a Kubernetes cluster, when a workflow runs on Job Compute (a job cluster or an instance pool), one physical node can only host one executor. Is this understanding right? If so, does that mean that when I create a job cluster for my workflow with a high-end instance type, I need to configure the executor with bigger values? For example, if I specify a node type with 122 GB of memory and 16 cores, then since one node runs only a single executor, a typical config carried over from k8s like
spark.executor.memory 16g
spark.executor.cores 4
would waste most of the node's memory and cores, right?
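To make it concrete, this is roughly the node-sized config I imagine I would have to set instead. The exact numbers are just my guess, not a tested setup, and would presumably still need to leave headroom for the OS, Databricks services, and spark.executor.memoryOverhead:

# rough sizing for a 122 GB / 16-core worker with one executor per node
# (numbers are my assumption; headroom for overhead still needed)
spark.executor.cores 16
spark.executor.memory 100g

Is this the right direction, or does Databricks auto-size the executor to the node so that I should not set these at all?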
Thanks