Why driver memory is capped
09-25-2024 11:37 PM
Hi team,
We are using a job cluster to run a Spark MERGE, and somehow it needs a lot of driver memory. We allocated a 128 GB / 16-core node for the driver and set spark.driver.memory=96000m, which the Environment tab of the Spark UI confirms. The config is:
"spark.driver.memory": "96000m",
"spark.memory.offHeap.size": "11872m",
"spark.executor.memory": "86000m",
However, the cluster metrics show the driver memory capped below 48 GB. How can we make the driver use the full memory?
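As a sanity check (a rough sketch only; spark.sparkContext._jvm is an internal py4j handle used here purely for illustration), we compare the configured value with the max heap the driver JVM actually reports:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Value Spark was configured with
print("spark.driver.memory =", spark.conf.get("spark.driver.memory"))

# Max heap the driver JVM actually got (roughly the effective -Xmx), in GiB
runtime = spark.sparkContext._jvm.java.lang.Runtime.getRuntime()
print("driver JVM max heap = %.1f GiB" % (runtime.maxMemory() / 1024**3))

If the reported max heap is well below 96 GB, the setting is not reaching the driver JVM; if it matches, the cluster metric is measuring something else.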
- Labels: Spark
09-26-2024 06:27 AM
Could you please try increasing the number of partitions of the DataFrame by calling repartition() before the merge?
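For example, a minimal sketch with the Delta Lake Python API; the table path, key column, and partition count are placeholders for your pipeline, and source_df is assumed to be the DataFrame you merge in:

from delta.tables import DeltaTable

# Placeholder path and join key -- adjust to your tables
target = DeltaTable.forPath(spark, "/mnt/delta/target_table")

# Spread the incoming data over more partitions before the MERGE
source = source_df.repartition(400, "id")

(target.alias("t")
    .merge(source.alias("s"), "t.id = s.id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())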
09-26-2024 06:08 PM
Thanks for the response. What we don't understand is why the driver memory cannot be fully used (only 48 GB out of the 128 GB node is used by the driver). Is this related to repartitioning?
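To narrow this down, one illustrative check (again via the internal _jvm handle, so treat it as a sketch) is to list the heap flags the driver JVM was actually launched with and compare them against spark.driver.memory; depending on how the platform launches the driver process, the effective -Xmx may differ from the configured value:

# List heap-related flags of the running driver JVM (illustrative sketch)
mgmt = spark.sparkContext._jvm.java.lang.management.ManagementFactory
for arg in mgmt.getRuntimeMXBean().getInputArguments():
    if "Xmx" in arg or "Xms" in arg:
        print(arg)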

