
yarn.nodemanager.resource.memory-mb parameter update

Andriy_Shevchen
New Contributor

I am currently working out the proper cluster size for my Spark application, and I have a question about the Hadoop configuration parameter yarn.nodemanager.resource.memory-mb. From what I can see, this parameter sets the physical limit on memory available to Spark containers on a worker node running under the YARN scheduler. What I noticed is that on worker nodes of any size, this parameter stays at 8192. That bothers me, because it would imply that even on clusters with significantly larger workers, only 8192 MB is designated for executor memory. I tried to override the property by adding it to /home/ubuntu/databricks/spark/dbconf/hadoop/core-site.xml through a cluster init script. However, even though I set it there, it appears to be overridden from elsewhere. So I want to understand:

- whether the value set here really limits the amount of executor memory on the cluster

- if so, how it can be overridden from the right place so that the memory available on the worker node is fully utilized
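
For reference, here is roughly how I inspected the values actually in effect from a notebook (a sketch; `_jsc` is an internal PySpark accessor, used here only for read-only inspection):

```python
from pyspark.sql import SparkSession

# On Databricks a session already exists; getOrCreate() just returns it.
spark = SparkSession.builder.getOrCreate()

# The Hadoop configuration as the running cluster sees it.
# get() returns None if the key is not set anywhere.
hadoop_conf = spark.sparkContext._jsc.hadoopConfiguration()
print(hadoop_conf.get("yarn.nodemanager.resource.memory-mb"))

# The Spark-side setting that actually bounds the executor JVM heap.
print(spark.conf.get("spark.executor.memory", "not set"))
```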

Thanks!


4 REPLIES

Kaniz
Community Manager

Hi @Andriy Shevchenko! My name is Kaniz, and I'm the technical moderator here. Great to meet you, and thanks for your question! Let's see if your peers in the community have an answer to your question first; otherwise, I will get back to you soon. Thanks.

-werners-
Esteemed Contributor III

Databricks does not use YARN, AFAIK (see this topic).

Memory allocation is handled by spark.executor.memory.

The amount of memory available to each executor is allocated within the Java Virtual Machine (JVM) memory heap.
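
A minimal sketch of setting it when creating a session (values are illustrative; on Databricks you would normally put these in the cluster's Spark config rather than in code):

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("executor-memory-demo")
    # Per-executor JVM heap; pick a value that fits your worker node size.
    .config("spark.executor.memory", "8g")
    # Off-heap overhead reserved on top of the heap (illustrative value).
    .config("spark.executor.memoryOverhead", "1g")
    .getOrCreate()
)

print(spark.conf.get("spark.executor.memory"))
```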

Here is some more detail:

Azure

AWS

You can also do a test run on a cluster and then monitor the workers and driver using Ganglia, which gives you a view of what's going on and how much memory is allocated/used.

jose_gonzalez
Moderator (Accepted Solution)

Hi @Andriy Shevchenko,

Databricks does not use YARN. I recommend trying the Databricks Community Edition (link) to get familiar and explore. You can check the Ganglia UI to see the cluster utilization: memory, CPU, I/O, etc.

Kaniz
Community Manager

Hi @Andriy Shevchenko, just a friendly follow-up. Do you still need help, or did the above response help you find the solution? Please let us know.
