Limit on increasing executor memory based on node type

938452
New Contributor III

Hi Databricks community,

I'm using a Databricks Jobs cluster to run some jobs. I'm setting both the worker and driver type to AWS m6gd.large, which has 2 cores and 8 GB of memory each.
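
For context, here is roughly what the cluster portion of my job spec looks like, written out as a Python dict (a sketch only; the runtime version and worker count are placeholders, and the field names follow the Jobs API "new_cluster" object):

    # Rough sketch of my job cluster spec (placeholder values); field names
    # follow the Databricks Jobs API "new_cluster" object.
    new_cluster = {
        "spark_version": "13.3.x-scala2.12",  # placeholder runtime version
        "node_type_id": "m6gd.large",         # workers: 2 cores, 8 GB each
        "driver_node_type_id": "m6gd.large",  # driver: same instance type
        "num_workers": 2,                     # placeholder worker count
    }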

After seeing that executor memory defaults to 2 GB, I wanted to increase it by setting "spark.executor.memory 6g" in the Spark config during cluster setup. When I do, the UI rejects the value and says the maximum I can set is 2 GB (see the attachment). Given that the worker has 8 GB of memory, why is it limited to only 2 GB? It's a similar situation for larger worker types: the limit seems to be much lower than what should actually be available.
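
For reference, this is the config line I'm trying to add and how I check what the executors actually end up with (a sketch only; I set the config through the cluster UI rather than in code, and the check assumes a notebook or job attached to the cluster in question):

    # Spark config line I'm trying to apply at cluster setup:
    #   spark.executor.memory 6g
    # Quick check of the effective value from a notebook/job on the cluster:
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()  # already provided on Databricks
    print(spark.sparkContext.getConf().get("spark.executor.memory", "not set"))
    # On my cluster this shows the ~2g default, since the UI refuses the 6g value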