<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic API monitoring of All-purpose clusters in MVP Articles</title>
    <link>https://community.databricks.com/t5/mvp-articles/api-monitoring-of-all-purpose-clusters/m-p/146524#M55</link>
    <description>&lt;DIV class=""&gt;A lot of Databricks spend isn’t “compute” at all — it’s &lt;SPAN class=""&gt;paid idle time&lt;/SPAN&gt; on all‑purpose clusters while they sit around waiting for &lt;SPAN class=""&gt;Auto Termination&lt;/SPAN&gt;.&amp;nbsp;Databricks UI is great at showing &lt;SPAN class=""&gt;starting/running/terminating&lt;/SPAN&gt;, but it often hides the key operational question:&lt;/DIV&gt;&lt;UL class=""&gt;&lt;LI&gt;&lt;SPAN class=""&gt;Is this cluster actually doing work right now, or just burning time until shutdown?&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;&lt;SPAN class=""&gt;Which scheduled jobs are running on an all‑purpose cluster (and when)?&lt;/SPAN&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;DIV class=""&gt;A simple case from my article:&lt;/DIV&gt;&lt;OL class=""&gt;&lt;LI&gt;The job finishes in&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN class=""&gt;6m 12s&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;The cluster then stays up for&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN class=""&gt;~30 more minutes&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;due to the termination timeout&lt;/LI&gt;&lt;LI&gt;You pay for&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN class=""&gt;~36 minutes total&lt;/SPAN&gt;, where&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN class=""&gt;~30 minutes is pure idle,&lt;/SPAN&gt;&amp;nbsp;and this is easiest to miss during off-hours/night runs.&lt;/LI&gt;&lt;/OL&gt;&lt;DIV class=""&gt;With the same assumptions, my numbers showed a &lt;SPAN class=""&gt;job cluster can be up to 12.5× cheaper&lt;/SPAN&gt;, largely because it avoids that expensive “waiting window”.&lt;/DIV&gt;&lt;DIV class=""&gt;I wrote up the approach and built a more visual monitoring view to spot these leaks fast and fix them via settings or by choosing the right cluster type.&lt;/DIV&gt;</description>
    <pubDate>Mon, 02 Feb 2026 11:07:36 GMT</pubDate>
    <dc:creator>protmaks</dc:creator>
    <dc:date>2026-02-02T11:07:36Z</dc:date>
    <item>
      <title>API monitoring of All-purpose clusters</title>
      <link>https://community.databricks.com/t5/mvp-articles/api-monitoring-of-all-purpose-clusters/m-p/146524#M55</link>
      <description>&lt;DIV class=""&gt;A lot of Databricks spend isn’t “compute” at all — it’s &lt;SPAN class=""&gt;paid idle time&lt;/SPAN&gt; on all‑purpose clusters while they sit around waiting for &lt;SPAN class=""&gt;Auto Termination&lt;/SPAN&gt;.&amp;nbsp;Databricks UI is great at showing &lt;SPAN class=""&gt;starting/running/terminating&lt;/SPAN&gt;, but it often hides the key operational question:&lt;/DIV&gt;&lt;UL class=""&gt;&lt;LI&gt;&lt;SPAN class=""&gt;Is this cluster actually doing work right now, or just burning time until shutdown?&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;&lt;SPAN class=""&gt;Which scheduled jobs are running on an all‑purpose cluster (and when)?&lt;/SPAN&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;DIV class=""&gt;A simple case from my article:&lt;/DIV&gt;&lt;OL class=""&gt;&lt;LI&gt;The job finishes in&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN class=""&gt;6m 12s&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;The cluster then stays up for&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN class=""&gt;~30 more minutes&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;due to the termination timeout&lt;/LI&gt;&lt;LI&gt;You pay for&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN class=""&gt;~36 minutes total&lt;/SPAN&gt;, where&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN class=""&gt;~30 minutes is pure idle,&lt;/SPAN&gt;&amp;nbsp;and this is easiest to miss during off-hours/night runs.&lt;/LI&gt;&lt;/OL&gt;&lt;DIV class=""&gt;With the same assumptions, my numbers showed a &lt;SPAN class=""&gt;job cluster can be up to 12.5× cheaper&lt;/SPAN&gt;, largely because it avoids that expensive “waiting window”.&lt;/DIV&gt;&lt;DIV class=""&gt;I wrote up the approach and built a more visual monitoring view to spot these leaks fast and fix them via settings or by choosing the right cluster type.&lt;/DIV&gt;</description>
      <pubDate>Mon, 02 Feb 2026 11:07:36 GMT</pubDate>
      <guid>https://community.databricks.com/t5/mvp-articles/api-monitoring-of-all-purpose-clusters/m-p/146524#M55</guid>
      <dc:creator>protmaks</dc:creator>
      <dc:date>2026-02-02T11:07:36Z</dc:date>
    </item>
  </channel>
</rss>