
Other memory of the driver is high even in a newly spun cluster

Rubini_MJ
New Contributor

Hi Team Experts,

I am seeing high memory consumption in the "other" category of the memory utilization chart on the metrics tab. Right now I am not running any jobs, yet out of 8 GB of driver memory almost 6 GB is taken by "other" and only about 1.5 GB shows as used memory. When I start running jobs, the driver's "other" memory grows even further and only about 175 MB is left free. When this continues, the cluster starts throwing "Driver not responding likely due to GC" or "Spark is running out of memory" errors and the notebook gets detached and reattached. I don't understand why the driver is consuming so much memory and leaving no room to run normal workloads.

I understand this memory is used for cluster maintenance activities such as:

  • heartbeat messages
  • garbage collection (GC)
  • listening for job requests
  • hosting the Spark UI
  • monitoring resources

But even with no load running, 6 GB out of 8 GB seems too much.

I saw that I can increase the driver memory, but even when I go from 8 GB to 16 GB the same story continues: the driver's "other" memory then takes about 12 GB. Can anyone tell me what is actually happening and how to mitigate it?

1 ACCEPTED SOLUTION


User16539034020
Databricks Employee

Hello, 

Thanks for contacting Databricks Support. 

It seems you are concerned about high memory consumption in the "other" category on the driver node of a Spark cluster. As no logs or detailed information were provided, I can only outline several potential causes:

  1. Memory leaks, which can gradually consume memory. These are often due to bugs or inefficient memory management in the code.
  2. As you mentioned, activities like heartbeat messages, GC, listening for job requests, hosting the Spark UI, and monitoring resources do consume memory, but it's unusual for them to take up such a large proportion.
  3. Inefficient Garbage Collection. If the GC is not configured properly or is inefficient, it might not be freeing up memory as expected.
  4. Storing extensive data in the Spark UI can also consume considerable memory (see the config sketch just after this list for settings that cap this history).
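
For cause 4, the amount of job, stage, and task history the driver keeps for the Spark UI can be capped. The settings below are standard Apache Spark properties and the values are only illustrative, not tuned recommendations; they would be pasted into the cluster's Spark config (Advanced Options > Spark) before the cluster starts:

    spark.ui.retainedJobs 250
    spark.ui.retainedStages 250
    spark.ui.retainedTasks 25000
    spark.sql.ui.retainedExecutions 250

Lowering these reduces the listener/UI state held in driver memory, at the cost of less history being visible in the Spark UI.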

Based on the above analysis, please try the following mitigation strategies:

  1. Please refer to the documentation below on tuning garbage collection. You can try the G1GC garbage collector with -XX:+UseG1GC (a sketch of the corresponding cluster Spark config follows this list).
    https://spark.apache.org/docs/latest/tuning.html#garbage-collection-tuning
    https://www.databricks.com/blog/2015/05/28/tuning-java-garbage-collection-for-spark-applications.htm...
  2. Utilize monitoring tools to get a more detailed view of memory usage. This can help identify specific areas where memory usage is abnormally high.
  3. Review the driver logs and stack traces for any anomalies or repeated patterns that could indicate the source of the memory usage.
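
For strategy 1, the driver JVM options have to be set when the cluster is created (they cannot be changed at runtime). A minimal sketch of the corresponding line in the cluster's Spark config would be:

    spark.driver.extraJavaOptions -XX:+UseG1GC -verbose:gc

spark.driver.extraJavaOptions is a standard Apache Spark property, and -verbose:gc writes GC activity to the driver's standard output so it shows up in the driver logs, which also helps with strategy 3. Any more detailed GC-logging flags depend on the JVM version of your Databricks Runtime, so verify them before adding. If the executors show the same symptom, spark.executor.extraJavaOptions takes the same flags.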

