Hi Team Experts,
I am seeing very high memory consumption under "other" in the memory utilization chart on the metrics tab. Right now I am not running any jobs, yet out of 8 GB of driver memory almost 6 GB is taken by "other" and only about 1.5 GB shows as used memory. When I start running jobs, the driver's "other" memory grows even further and free space drops to around 175 MB. When this continues, it starts throwing "Driver not responding, likely due to GC" or "Spark is running out of memory", and the notebook detaches and only reattaches later. I really want to understand why the driver is consuming so much memory and leaving no room for normal workloads.
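For reference, this is roughly how I am cross-checking the driver heap from the notebook. It is only a sketch: it assumes the notebook's built-in `spark` session and goes through PySpark's private `_jvm` gateway, so the numbers may not line up exactly with the metrics tab.

```python
# Diagnostic sketch: read the driver JVM's own view of its heap via
# PySpark's private _jvm gateway (not an official API).
runtime = spark.sparkContext._jvm.java.lang.Runtime.getRuntime()

mb = 1024 * 1024
max_heap = runtime.maxMemory() // mb    # heap ceiling (-Xmx)
reserved = runtime.totalMemory() // mb  # heap currently reserved by the JVM
used = reserved - runtime.freeMemory() // mb

print(f"max heap: {max_heap} MB, reserved: {reserved} MB, used: ~{used} MB")
```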
I understand some of this goes to cluster maintenance activities, viz.:
- heartbeat messages
- garbage collection (GC)
- listening for job requests
- hosting the Spark UI (see the sketch after this list)
- monitoring resources
But even so, with no load running, "other" taking 6 GB out of 8 GB seems excessive.
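One thing I am considering, on the assumption that accumulated Spark UI state (job/stage/query history) is part of what sits in "other", is lowering the UI retention settings. As far as I know these are read at driver startup, so on Databricks they would go in the cluster's Spark config rather than in a notebook; the builder below is purely illustrative.

```python
from pyspark.sql import SparkSession

# Illustrative sketch: cap how much job/stage/query history the driver
# retains for the Spark UI. These values take effect only at startup.
spark = (
    SparkSession.builder
    .config("spark.ui.retainedJobs", "100")           # default 1000
    .config("spark.ui.retainedStages", "100")         # default 1000
    .config("spark.sql.ui.retainedExecutions", "50")  # default 1000
    .getOrCreate()
)
```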
I saw that I can increase the driver memory, but even after going from 8 GB to 16 GB the same story continues: "other" just grows to take about 12 GB. Can anyone tell me what is actually happening and how to mitigate it?
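For completeness, this is how I am verifying what the driver actually started with (again assuming the built-in `spark` session; as far as I know `spark.driver.memory` is fixed at cluster startup and cannot be changed from a running notebook):

```python
# Confirm the memory setting the driver JVM actually started with.
conf = spark.sparkContext.getConf()
print(conf.get("spark.driver.memory", "not explicitly set"))
```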