Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

How Databricks assigns memory and cores

Brad
Contributor

Hi team,

We are using a job cluster with a 128 GB / 16-core node type for a workflow. From the documentation we understand that one worker is one node and runs one executor. In the Spark UI Environment tab we can see spark.executor.memory is 24g, yet the metrics show each worker's memory usage capped around 48 GB rather than using the full node memory. We don't set any Spark configs on the cluster, and spark.executor.cores doesn't appear in the Environment tab either. Can you please explain how Databricks allocates a node's resources to executors?

Thanks.
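For reference, the same values can be read programmatically. A minimal sketch in Python, assuming a Databricks notebook attached to this cluster, where spark (the SparkSession) and sc (the SparkContext) are predefined:

    # Inspect the executor settings the cluster actually resolved.
    print(spark.conf.get("spark.executor.memory"))             # e.g. "24g" on this cluster
    print(sc.getConf().get("spark.executor.cores", "unset"))   # "unset" when not explicitly configured
    print(sc.defaultParallelism)                               # total task slots across all executors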

1 REPLY

Walter_C
Honored Contributor

Databricks allocates resources to executors on a node based on several factors, and since you provided no explicit Spark configurations, your cluster is running with the default settings.

  1. Executor Memory Allocation:

    • The spark.executor.memory value you observed (24g) is the JVM heap given to each executor. It is deliberately only a fraction of the node's total memory, leaving headroom for the operating system, Databricks services, and off-heap overhead.
    • The ~48 GB usage cap per worker is consistent with two executors on the node, each holding a 24 GB heap.
  2. Executor Cores:

    • spark.executor.cores is missing from the Environment tab because it was never set explicitly, so the default applies: cores per executor works out to the node's total cores divided by the number of executors on it.
    • On your 16-core node, with two executors each executor would get 8 cores.
  3. Resource Allocation:

    • Databricks runs one executor per worker node by default, though the 48 GB cap above suggests two on this node type; the layout can be adjusted by setting the spark.executor.instances and spark.executor.cores configurations (see the sketch after this list).
    • The total memory handed to executors is always less than the node's physical memory, to leave room for system processes and Spark's own overhead.
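If you want to control the layout yourself, the overrides go into the cluster's Spark config (under Advanced options in the cluster UI, or the spark_conf map in the cluster/job API spec). A sketch with illustrative values only, not a recommendation; whether a given combination fits depends on the node type and its overhead:

    spark.executor.cores 8
    spark.executor.memory 40g

On a 16-core / 128 GB node, 8 cores per executor yields two executors, for 2 x 40 GB = 80 GB of executor heap, leaving the remaining memory for the OS, Databricks services, and off-heap overhead.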
