Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

yarn.nodemanager.resource.memory-mb parameter update

Andriy_Shevchen
New Contributor

I am currently working on determining the proper cluster size for my Spark application, and I have a question regarding the Hadoop configuration parameter yarn.nodemanager.resource.memory-mb. From what I see, this parameter sets the physical limit on the memory available to Spark containers on a worker node running under the YARN scheduler. What I noticed is that for a worker node of any size, this parameter is still set to 8192. This bothers me, because it would imply that even for clusters with significantly larger workers, only 8192 MB is designated for executor memory.

I tried to override the property by adding it to the /home/ubuntu/databricks/spark/dbconf/hadoop/core-site.xml file through a cluster init script. However, even though I set it there, it appears to be overridden from elsewhere. So I want to understand:

- whether the limit set here really caps the amount of executor memory for the cluster

- if so, how it should be overridden from some other place in order to properly utilize the memory available on the worker node

Thanks!
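For reference, the override attempt described above would boil down to appending a standard Hadoop &lt;property&gt; entry to that XML file. A minimal sketch of building such an entry (the 65536 value is purely an illustrative example, not a recommendation):

```python
# Hypothetical sketch: the kind of <property> entry an init script would append
# to a Hadoop configuration file (standard Hadoop XML configuration format).
# The value 65536 is an illustrative example, not a recommended setting.
import xml.etree.ElementTree as ET

def hadoop_property(name: str, value: str) -> str:
    prop = ET.Element("property")
    ET.SubElement(prop, "name").text = name
    ET.SubElement(prop, "value").text = value
    return ET.tostring(prop, encoding="unicode")

print(hadoop_property("yarn.nodemanager.resource.memory-mb", "65536"))
```

As the answers below note, though, this property has no effect on Databricks clusters.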

1 ACCEPTED SOLUTION


jose_gonzalez
Databricks Employee

Hi @Andriy Shevchenko,

Databricks does not use YARN. I recommend you try Databricks Community Edition (link) to get familiar with the platform and explore. You can check the Ganglia UI to see cluster utilization: memory, CPU, I/O, etc.


2 REPLIES

-werners-
Esteemed Contributor III

Databricks does not use YARN, AFAIK (see this topic).

The memory allocation is handled by spark.executor.memory.
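On Databricks, this is set per cluster rather than through YARN properties. For example, in the cluster's Spark config (under Advanced Options) you can add a space-separated key/value line like the following; the 16g value is just an illustrative choice for a larger worker:

```
spark.executor.memory 16g
```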

The amount of memory available to each executor is allocated within the Java Virtual Machine (JVM) memory heap.
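To make the JVM-heap point concrete, here is a rough sketch of how Spark's unified memory model carves up an executor heap, using the stock open-source Spark 3.x defaults (spark.memory.fraction = 0.6, 300 MB reserved). The numbers are illustrative only, and Databricks may tune these defaults differently:

```python
# Rough sketch of Spark's unified memory model arithmetic, using stock
# Spark 3.x defaults (Databricks may tune these). All values are in MB.
RESERVED_MB = 300  # fixed reservation inside the executor heap

def executor_heap_breakdown(executor_memory_mb: int,
                            memory_fraction: float = 0.6):  # spark.memory.fraction default
    usable = executor_memory_mb - RESERVED_MB
    unified = int(usable * memory_fraction)  # shared by execution + storage
    user = usable - unified                  # user data structures, UDFs, etc.
    return {"reserved": RESERVED_MB, "unified": unified, "user": user}

# For the 8192 MB figure from the question:
print(executor_heap_breakdown(8192))
```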

Here is some more detail:

Azure

AWS

You can also do a test run on a cluster and then monitor the workers and driver using Ganglia, which gives you a view of what's going on and how much memory is allocated/used.

