Help understanding RAM utilization graph
I am trying to understand the following graph that Databricks is showing me, and failing:

[screenshot of the cluster's RAM utilization graph]

What is that constant, lightly shaded area close to 138GB? It is not explained in the "Usage type" legend. The job runs entirely on the driver node, not utilizing any of the Spark worker nodes; it's just a Python script. I know that memory usage of ~138GB is real, because the job was failing on a 128GB driver node and seems to be happy on a 256GB driver.
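For reference, this is roughly how I am sanity-checking memory from inside the script itself; a minimal sketch, assuming the psutil package is available on the driver (log_memory is just a name I made up):

```python
# Minimal sketch: cross-check the Databricks RAM chart from inside the
# script. Assumes the psutil package is importable on the driver.
import psutil

GIB = 1024 ** 3

def log_memory(label: str) -> None:
    """Print this process's resident memory plus node-wide usage."""
    rss = psutil.Process().memory_info().rss   # this Python process only
    vm = psutil.virtual_memory()               # the whole driver node
    print(f"[{label}] process RSS: {rss / GIB:.1f} GiB, "
          f"node: {vm.used / GIB:.1f} used / {vm.total / GIB:.1f} GiB total")

log_memory("before the memory-heavy step")
# ... memory-heavy work goes here ...
log_memory("after the memory-heavy step")
```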
Hi @meshko
The light-shaded area represents the total available RAM. A tooltip shows the value when you hover over the chart with the mouse.
So why does the total available RAM go above 128GB if the usage graph never gets above 90GB?
@meshko , I think you are seeing the RAM usage of a 128GB RAM instance. Is that correct? Could you confirm the instance type of your cluster node? Although the screenshot you attached in the first message seems to reach almost 139GB, I would guess the tooltip will show about 128GB in total if it is a 128GB RAM instance.
I just tested a single 128GB RAM instance, and the RAM chart shows this:

[screenshot of the RAM chart from the test instance]
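If you want to double-check on your side, here is a minimal sketch, assuming psutil is installed on your driver, that prints the total RAM the node reports in both binary and decimal units. One possible source of the ~138GB figure is unit conversion: 128 GiB is about 137.4 GB, so a chart whose axis is in decimal GB could draw the band near 138 on a "128GB" (really GiB) instance.

```python
# Minimal sketch (assumes psutil is available on the driver): print the
# node's total RAM in both binary (GiB) and decimal (GB) units.
# Note: 128 GiB = 128 * 2**30 bytes ~= 137.4 GB, which may be why a chart
# whose axis is in GB shows a band near 138 on a "128GB" instance.
import psutil

total = psutil.virtual_memory().total
print(f"total RAM: {total / 2**30:.1f} GiB ({total / 1e9:.1f} GB)")
```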

