<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: How to Programmatically Retrieve Cluster Memory Usage? in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/how-to-programmatically-retrieve-cluster-memory-usage/m-p/70794#M34152</link>
    <description>&lt;P&gt;Hi Alessandro,&lt;/P&gt;&lt;P&gt;Thank you for your help and suggestion!&amp;nbsp;&lt;/P&gt;&lt;P&gt;For the second point, I’m looking to analyze the memory utilization over the duration of the job. Specifically, I want to know the average &amp;amp; total memory used during a single job run compared to the total memory available in that specific cluster - set by prior configuration. However, any additional useful metrics (like per worker) that I can access in the notebook would also be appreciated.&amp;nbsp;&lt;/P&gt;&lt;P&gt;I'm thinking of creating a Delta table to save these statistics to. I'd like to run performance tests for specific use cases and see how certain metrics change with different cluster types for a given number of records, to establish a baseline. Later, we plan to integrate this into our CI/CD pipeline to optionally track, at an "approximate" level, how much our changes could affect the baseline performance.&lt;/P&gt;</description>
    <pubDate>Mon, 27 May 2024 18:30:39 GMT</pubDate>
    <dc:creator>Akuhei05</dc:creator>
    <dc:date>2024-05-27T18:30:39Z</dc:date>
    <item>
      <title>How to Programmatically Retrieve Cluster Memory Usage?</title>
      <link>https://community.databricks.com/t5/data-engineering/how-to-programmatically-retrieve-cluster-memory-usage/m-p/70762#M34142</link>
      <description>&lt;P&gt;Hi!&lt;/P&gt;&lt;P&gt;I need help with the following:&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;Programmatically retrieve the maximum memory configured for the cluster attached to the notebook/job - I think this is achievable through the system tables or the Clusters API, but I'm open to other suggestions&lt;/LI&gt;&lt;LI&gt;Execute a job on this cluster and, upon its completion, determine the amount of memory utilized during the job and retrieve this information programmatically inside a simple notebook - Note: the Ganglia UI is out of the question, as we are using LTS 13.3. We also have a Spark-based listener implemented; its logs are ingested into ADX. However, I haven't found a metric like this.&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;Could you provide guidance so that I can create a Delta table that includes these statistics?&lt;/P&gt;&lt;P&gt;Thank you!&lt;/P&gt;</description>
      <pubDate>Mon, 27 May 2024 16:17:50 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/how-to-programmatically-retrieve-cluster-memory-usage/m-p/70762#M34142</guid>
      <dc:creator>Akuhei05</dc:creator>
      <dc:date>2024-05-27T16:17:50Z</dc:date>
    </item>
    <item>
      <title>Re: How to Programmatically Retrieve Cluster Memory Usage?</title>
      <link>https://community.databricks.com/t5/data-engineering/how-to-programmatically-retrieve-cluster-memory-usage/m-p/70782#M34145</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/105797"&gt;@Akuhei05&lt;/a&gt;&amp;nbsp;how are you?&lt;/P&gt;
&lt;P&gt;For the first topic, you can add a cell to your notebook that reads the configured executor memory from the Spark configuration of the attached cluster every time it is run. For this, please see below:&lt;/P&gt;
&lt;LI-CODE lang="python"&gt;# Read the executor memory configured for the attached cluster, e.g. '8g'
spark_memory = spark.sparkContext.getConf().get('spark.executor.memory')
print(spark_memory)&lt;/LI-CODE&gt;
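Note that the value above is per executor, and Spark memory strings come in forms like '8g' or '512m'. As a minimal sketch (not Databricks-specific; num_executors is an assumed input here, which in practice could come from the Clusters API), the cluster-wide executor memory could be estimated like this:

```python
def memory_string_to_mb(value):
    # Convert a Spark memory string such as '8g' or '512m' to megabytes.
    units = {'k': 1 / 1024, 'm': 1, 'g': 1024, 't': 1024 * 1024}
    value = value.strip().lower()
    if value[-1] in units:
        return float(value[:-1]) * units[value[-1]]
    return float(value) / (1024 * 1024)  # bare numbers are bytes in Spark

def total_executor_memory_mb(executor_memory, num_executors):
    # Rough cluster-wide total: per-executor memory times executor count.
    return memory_string_to_mb(executor_memory) * num_executors

print(total_executor_memory_mb('8g', 4))  # prints 32768.0
```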
&lt;P&gt;&amp;nbsp;For the second point, when you say "&lt;SPAN&gt;determine the amount of memory utilized during the job&lt;/SPAN&gt;", do you mean the maximum used in total, per worker, or the sum across workers?&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Best,&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Alessandro&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 27 May 2024 18:07:44 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/how-to-programmatically-retrieve-cluster-memory-usage/m-p/70782#M34145</guid>
      <dc:creator>anardinelli</dc:creator>
      <dc:date>2024-05-27T18:07:44Z</dc:date>
    </item>
    <item>
      <title>Re: How to Programmatically Retrieve Cluster Memory Usage?</title>
      <link>https://community.databricks.com/t5/data-engineering/how-to-programmatically-retrieve-cluster-memory-usage/m-p/70794#M34152</link>
      <description>&lt;P&gt;Hi Alessandro,&lt;/P&gt;&lt;P&gt;Thank you for your help and suggestion!&amp;nbsp;&lt;/P&gt;&lt;P&gt;For the second point, I’m looking to analyze the memory utilization over the duration of the job. Specifically, I want to know the average &amp;amp; total memory used during a single job run compared to the total memory available in that specific cluster - set by prior configuration. However, any additional useful metrics (like per worker) that I can access in the notebook would also be appreciated.&amp;nbsp;&lt;/P&gt;&lt;P&gt;I'm thinking of creating a Delta table to save these statistics to. I'd like to run performance tests for specific use cases and see how certain metrics change with different cluster types for a given number of records, to establish a baseline. Later, we plan to integrate this into our CI/CD pipeline to optionally track, at an "approximate" level, how much our changes could affect the baseline performance.&lt;/P&gt;</description>
      <pubDate>Mon, 27 May 2024 18:30:39 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/how-to-programmatically-retrieve-cluster-memory-usage/m-p/70794#M34152</guid>
      <dc:creator>Akuhei05</dc:creator>
      <dc:date>2024-05-27T18:30:39Z</dc:date>
    </item>
    <item>
      <title>Re: How to Programmatically Retrieve Cluster Memory Usage?</title>
      <link>https://community.databricks.com/t5/data-engineering/how-to-programmatically-retrieve-cluster-memory-usage/m-p/70800#M34156</link>
      <description>&lt;P&gt;Great use case!&lt;/P&gt;
&lt;P&gt;Have you ever heard about Prometheus with Spark 3.0? It's a tool that can export live metrics from your jobs and runs to a sink, which you can then read as a stream. I've personally never used it for this use case, but it lets you monitor every metric, write it out, and derive insights from it (such as averages and totals) in a separate pipeline, which can finally become a table.&lt;/P&gt;
&lt;P&gt;To better understand, you can check these links below:&lt;/P&gt;
&lt;P&gt;1. Session on how to use and enable Prometheus in Databricks:&amp;nbsp;&lt;A href="https://www.youtube.com/watch?v=FDzm3MiSfiE" target="_blank"&gt;https://www.youtube.com/watch?v=FDzm3MiSfiE&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;2. Spark official guide:&amp;nbsp;&lt;A href="https://spark.apache.org/docs/3.1.1/monitoring.html" target="_blank"&gt;https://spark.apache.org/docs/3.1.1/monitoring.html&lt;/A&gt;&lt;/P&gt;
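As a hedged sketch of what enabling this could look like: Spark 3.x ships a built-in PrometheusServlet sink, configured via Spark properties as described in the monitoring guide linked above. The paths below are Spark's defaults, and on Databricks these would typically go in the cluster's Spark config; verify against your runtime before relying on them.

```properties
# Expose executor metrics on the Spark UI at /metrics/executors/prometheus
spark.ui.prometheus.enabled true
# Serve driver metrics in Prometheus format at /metrics/prometheus
spark.metrics.conf.*.sink.prometheusServlet.class org.apache.spark.metrics.sink.PrometheusServlet
spark.metrics.conf.*.sink.prometheusServlet.path /metrics/prometheus
```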
&lt;P&gt;Best,&lt;/P&gt;
&lt;P&gt;Alessandro&lt;/P&gt;
</description>
      <pubDate>Mon, 27 May 2024 18:45:11 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/how-to-programmatically-retrieve-cluster-memory-usage/m-p/70800#M34156</guid>
      <dc:creator>anardinelli</dc:creator>
      <dc:date>2024-05-27T18:45:11Z</dc:date>
    </item>
  </channel>
</rss>

