11-09-2021 02:14 AM
I am currently working out the proper cluster size for my Spark application, and I have a question about the Hadoop configuration parameter yarn.nodemanager.resource.memory-mb. As far as I can tell, this parameter sets the physical limit on the memory available to Spark containers on a worker node running under the YARN scheduler. What I noticed is that, regardless of worker node size, this parameter is always set to 8192. That bothers me, because it would imply that even on clusters with much larger workers, only 8192 MB is designated for executor memory. I tried to override the property by adding it to
/home/ubuntu/databricks/spark/dbconf/hadoop/core-site.xml through a cluster init script, but even though I set it there, it appears to be overridden from somewhere else. So I would like to understand:
- whether this limit really caps the amount of executor memory on the cluster
- if so, how and where it should be overridden to properly utilize the memory available on the worker node
Thanks!
- Labels: Spark application
Accepted Solutions
11-12-2021 04:16 PM
Hi @Andriy Shevchenko,
Databricks does not use YARN. I recommend trying Databricks Community Edition (link) to get familiar with the platform and explore. You can also check the Ganglia UI to see cluster utilization: memory, CPU, I/O, etc.
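Since YARN is not in the picture, executor memory is set through the cluster's Spark config (Clusters > Advanced Options > Spark) rather than through Hadoop properties. A hypothetical fragment (the 20g value is purely illustrative; Databricks normally sizes this automatically from the worker instance type):

```
spark.executor.memory 20g
```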
11-10-2021 02:03 AM
Databricks does not use YARN, AFAIK (see this topic).
Memory allocation is handled by spark.executor.memory: the amount of memory available to each executor is allocated within the executor's Java Virtual Machine (JVM) heap.
Here is some more detail:
You can also do a test run on a cluster and then monitor the workers and driver using Ganglia, which gives you a view of what's going on and how much memory is allocated/used.
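As a back-of-envelope check of what one executor actually requests from a worker node, here is a small sketch. The helper names are hypothetical; the "heap plus max(10% of heap, 384 MB) overhead" rule mirrors the documented default of spark.executor.memoryOverhead, so treat this as illustrative rather than authoritative:

```python
def parse_mem_mb(s: str) -> int:
    """Parse a Spark-style memory string like '8192m' or '4g' into MB."""
    units = {"k": 1 / 1024, "m": 1, "g": 1024, "t": 1024 * 1024}
    s = s.strip().lower()
    if s and s[-1] in units:
        return int(float(s[:-1]) * units[s[-1]])
    raise ValueError(f"expected a unit suffix (k/m/g/t): {s!r}")

def executor_request_mb(executor_memory: str) -> int:
    """Estimate total MB one executor needs: JVM heap + off-heap overhead.

    Uses Spark's documented default overhead of max(10% of heap, 384 MB);
    an explicit spark.executor.memoryOverhead setting would replace it.
    """
    heap = parse_mem_mb(executor_memory)
    overhead = max(int(heap * 0.10), 384)
    return heap + overhead

print(executor_request_mb("8g"))  # 8192 MB heap + 819 MB overhead = 9011
```

This is why an executor with an 8g heap will not fit on a worker with exactly 8 GB of RAM: the overhead (and the OS/daemons) need headroom too.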

