I am currently working on sizing a cluster for my Spark application, and I have a question about the Hadoop configuration parameter yarn.nodemanager.resource.memory-mb. As far as I can tell, this parameter sets the physical limit on the memory available for Spark containers on a worker node running under the YARN scheduler. What I noticed is that for worker nodes of any size, this parameter is always set to 8192. This bothers me, because it would imply that even on clusters with much larger workers, only 8192 MB is made available for executor memory. I tried to override the property by adding it to
/home/ubuntu/databricks/spark/dbconf/hadoop/core-site.xml through a cluster init script. However, even though I set it there, it looks like it is being overridden from somewhere else.
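For reference, this is how I am reading the effective value from a notebook. It is a minimal sketch, assuming PySpark on Databricks; it reads the Hadoop configuration as the driver sees it, which may not be exactly what the NodeManager uses, but it matches the 8192 I keep seeing:

```python
# Read the Hadoop configuration as the driver sees it
# (assumes a live `spark` session, as in a Databricks notebook)
hadoop_conf = spark.sparkContext._jsc.hadoopConfiguration()
print(hadoop_conf.get("yarn.nodemanager.resource.memory-mb"))  # -> 8192 on every node size I tried
```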
So from here I want to understand:
- whether the limit set here really caps the amount of executor memory for the cluster (I am comparing it against Spark's own settings in the snippet after this list)
- if so, how and from where it should be overridden so that the memory available on the worker node is properly utilized
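In case it is relevant, this is how I am checking what Spark itself reports for the executors. Again just a sketch, assuming PySpark; the second argument to spark.conf.get is a fallback in case the key is unset on the cluster:

```python
# What Spark was actually configured to give each executor
print(spark.conf.get("spark.executor.memory", "not set"))
print(spark.conf.get("spark.executor.cores", "not set"))
```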
Thanks!