Hi team
We are using a job cluster to run a Spark MERGE, and somehow it needs a lot of driver memory. We allocate a 128 GB / 16-core node for the driver and set spark.driver.memory=96000m; the Environment tab of the Spark UI also shows 96000m. The config looks like:
"spark.driver.memory": "96000m",
"spark.memory.offHeap.size": "11872m",
"spark.executor.memory": "86000m",
However, in the cluster metrics the driver memory usage appears capped below 48 GB. How can we make the driver fully use the allocated memory?
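
For reference, a minimal driver-side check we could run (a sketch, assuming a Scala notebook or code running on the driver where the `spark` SparkSession is available) to confirm the heap the driver JVM actually received and the fraction Spark applies on top of it:

// Sketch: confirm the max heap the driver JVM actually got (should roughly match spark.driver.memory)
val maxHeapGb = Runtime.getRuntime.maxMemory.toDouble / (1024L * 1024 * 1024)
println(f"Driver JVM max heap: $maxHeapGb%.1f GB")

// spark.memory.fraction (default 0.6) is applied to (heap - ~300 MB reserved),
// so the unified execution+storage memory Spark reports can be well below -Xmx.
val fraction = spark.conf.get("spark.memory.fraction", "0.6")
println(s"spark.memory.fraction = $fraction")

This is just to rule out the JVM heap being smaller than configured; the numbers shown in UI/metrics pages may reflect Spark's unified memory or actual usage rather than the full -Xmx.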