How Databricks assigns memory and cores
09-24-2024 03:43 PM
Hi team,
We are using a job cluster with a node type of 128 GB memory and 16 cores for a workflow. From the documentation we understand that one worker is one node and runs one executor. In the Spark UI Environment tab we can see that spark.executor.memory is 24G, and from the metrics the memory usage per worker appears to be capped at around 48G instead of using the full memory. We don't specify any Spark configs on the cluster. Also, spark.executor.cores is not shown in the Environment tab. Can you please explain how Databricks allocates a node's resources to its executors?
Thanks.
Labels: Spark
10-01-2024 07:41 AM
Databricks allocates resources to executors on a node based on several factors. Since no specific Spark configurations were provided, your cluster is running with the default settings.
- Executor Memory Allocation:
  - The spark.executor.memory setting you observed (24G) is the amount of memory allocated to each executor. This is typically a fraction of the total node memory, leaving enough overhead for the operating system and other processes (you can confirm the effective values with the snippet below).
  - The observed memory usage cap of 48G per worker node suggests that there might be two executors per node, each using 24G of memory.
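As a quick sanity check, here is a minimal sketch assuming you run it in a Databricks notebook (where spark is predefined). It reads the effective values from the runtime config; keys that were never set fall back to the default you pass in:

```python
# Read the effective executor settings from a Databricks notebook.
# The second argument to conf.get() is returned when the key is unset.
print(spark.conf.get("spark.executor.memory"))              # e.g. "24g"
print(spark.conf.get("spark.executor.instances", "unset"))  # default here
print(spark.conf.get("spark.executor.cores", "unset"))      # default here
```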
- Executor Cores:
  - The absence of spark.executor.cores in the Spark UI Environment tab indicates that the default configuration is being used. By default, Databricks assigns one executor per worker node, and the number of cores per executor is the node's total core count divided by the number of executors (see the check below).
  - For a node with 16 cores, if there are two executors, each executor would typically use 8 cores.
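For a rough confirmation of the core count (again a sketch, assuming an otherwise idle cluster), defaultParallelism usually equals the total number of cores across all executors:

```python
# On an idle cluster, defaultParallelism usually equals the total
# number of executor cores -- e.g. 16 for a single 16-core worker.
print(spark.sparkContext.defaultParallelism)
```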
- Resource Allocation:
  - Databricks runs one executor per worker node by default, but this can be adjusted by specifying the spark.executor.instances and spark.executor.cores configurations (see the sketch below).
  - The total memory available to executors on a node is less than the node's total memory, to leave room for system processes and Spark's own overhead.
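If you do want to pin these values yourself, set them in the cluster definition. A hypothetical spark_conf fragment for a job cluster, with illustrative numbers for a 128 GB / 16-core node (not a recommendation):

```python
# Hypothetical spark_conf fragment for a job cluster definition.
# The same key/value pairs can be pasted into the cluster UI's
# Spark config box, one "key value" pair per line.
spark_conf = {
    "spark.executor.instances": "2",   # two executors per node
    "spark.executor.cores": "8",       # 16 cores / 2 executors
    "spark.executor.memory": "24g",    # leave headroom for OS/overhead
}
```

Note that these settings are fixed at cluster startup, so they must go on the cluster configuration itself; they cannot be changed at runtime with spark.conf.set.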

