Hi team,
We are using a job cluster with a 128 GB memory + 16 core node type for a workflow. From the documentation we understand that one worker is one node and runs one executor. In the Spark UI Environment tab we can see that spark.executor.memory is 24g, and from the metrics the worker's memory usage appears to be capped at around 48 GB rather than using the full node memory. We do not specify any Spark configs on the cluster. Also, spark.executor.cores does not appear in the Environment tab at all. Can you please explain how Databricks allocates a node's resources to the executor?
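For reference, this is roughly how we are checking the effective values from a notebook attached to the cluster (a minimal PySpark sketch; the config keys are standard Spark settings, and the defaults passed to `conf.get` are just placeholders for keys that were never set):

```python
from pyspark.sql import SparkSession

# In a Databricks notebook `spark` already exists; getOrCreate()
# simply returns that same session.
spark = SparkSession.builder.getOrCreate()

# Executor memory as resolved by the platform (shows 24g for us).
print(spark.conf.get("spark.executor.memory", "not set"))

# We never set this ourselves, and it is missing from the Environment
# tab, so we pass a default instead of letting the lookup raise.
print(spark.conf.get("spark.executor.cores", "not set"))

# Rough proxy for how many task slots the cluster exposes overall.
print(spark.sparkContext.defaultParallelism)
```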
Thanks.