<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Understanding Autoscaling in Databricks: Under What Conditions Does Spark Add a New Worker Node? in Get Started Discussions</title>
    <link>https://community.databricks.com/t5/get-started-discussions/understanding-autoscaling-in-databricks-under-what-conditions/m-p/148651#M11450</link>
    <description>&lt;P&gt;Is the above information true for job clusters as well? It looks like the enhanced autoscaler is only available for pipelines.&lt;/P&gt;</description>
    <pubDate>Tue, 17 Feb 2026 23:37:33 GMT</pubDate>
    <dc:creator>aranjan99</dc:creator>
    <dc:date>2026-02-17T23:37:33Z</dc:date>
    <item>
      <title>Understanding Autoscaling in Databricks: Under What Conditions Does Spark Add a New Worker Node?</title>
      <link>https://community.databricks.com/t5/get-started-discussions/understanding-autoscaling-in-databricks-under-what-conditions/m-p/102206#M9136</link>
      <description>&lt;P class=""&gt;I’m currently working with Databricks autoscaling configurations and trying to better understand how Spark decides when to spin up additional worker nodes. My cluster has a minimum of one worker and can scale up to five. I know that tasks are assigned to cores and that if more tasks are queued than available cores, Spark may consider adding a new worker—assuming autoscaling is enabled. But what specific conditions or metrics does Spark use to trigger the autoscaling event?&lt;/P&gt;&lt;P class=""&gt;&lt;U&gt;&lt;STRONG&gt;For example:&lt;/STRONG&gt;&lt;/U&gt;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Is it based solely on the number of pending tasks in the scheduler queue?&lt;/LI&gt;&lt;LI&gt;Does it consider task completion times, memory usage, or CPU utilization on existing workers?&lt;/LI&gt;&lt;LI&gt;How quickly does autoscaling react once these conditions are met?&lt;/LI&gt;&lt;/UL&gt;&lt;P class=""&gt;A practical scenario: If I have a single worker with 8 cores and I have more tasks than cores for a prolonged period, will Spark immediately add another worker or does it wait for some threshold of sustained load?&lt;/P&gt;&lt;P class=""&gt;I’d appreciate insights from anyone who has worked with Databricks autoscaling in production. Any reference to official documentation or real-world examples of how Spark conditions must be met before a new worker is allocated would be very helpful.&lt;/P&gt;</description>
      <pubDate>Mon, 16 Dec 2024 08:29:20 GMT</pubDate>
      <guid>https://community.databricks.com/t5/get-started-discussions/understanding-autoscaling-in-databricks-under-what-conditions/m-p/102206#M9136</guid>
      <dc:creator>h_h_ak</dc:creator>
      <dc:date>2024-12-16T08:29:20Z</dc:date>
    </item>
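For context, the one-to-five worker range described in the question corresponds to the autoscale block of a Databricks cluster definition. A minimal sketch, using the public Clusters API field names; the cluster name, runtime version, and node type below are illustrative assumptions, not values from the thread:

```python
import json

# Autoscale bounds from the post: minimum 1 worker, maximum 5.
# Field names follow the Databricks Clusters API; the name, runtime
# version, and node type are illustrative placeholders.
cluster_spec = {
    "cluster_name": "autoscaling-demo",
    "spark_version": "15.4.x-scala2.12",
    "node_type_id": "i3.2xlarge",  # an 8-vCPU worker, as in the scenario
    "autoscale": {"min_workers": 1, "max_workers": 5},
}

print(json.dumps(cluster_spec["autoscale"], sort_keys=True))
```

With this spec, the cluster manager is free to run anywhere between 1 and 5 workers; the discussion below is about what makes it move within that range.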
    <item>
      <title>Re: Understanding Autoscaling in Databricks: Under What Conditions Does Spark Add a New Worker Node?</title>
      <link>https://community.databricks.com/t5/get-started-discussions/understanding-autoscaling-in-databricks-under-what-conditions/m-p/102212#M9137</link>
      <description>&lt;P&gt;Databricks autoscaling is designed to dynamically adjust the number of worker nodes in a cluster based on workload demand, optimizing resource utilization and cost. Understanding the conditions under which Spark triggers autoscaling requires insight into how Databricks monitors and interprets the workload.&lt;/P&gt;</description>
      <pubDate>Mon, 16 Dec 2024 11:10:29 GMT</pubDate>
      <guid>https://community.databricks.com/t5/get-started-discussions/understanding-autoscaling-in-databricks-under-what-conditions/m-p/102212#M9137</guid>
      <dc:creator>17abhishek</dc:creator>
      <dc:date>2024-12-16T11:10:29Z</dc:date>
    </item>
    <item>
      <title>Re: Understanding Autoscaling in Databricks: Under What Conditions Does Spark Add a New Worker Node?</title>
      <link>https://community.databricks.com/t5/get-started-discussions/understanding-autoscaling-in-databricks-under-what-conditions/m-p/102213#M9138</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/118664"&gt;@h_h_ak&lt;/a&gt;&amp;nbsp;,&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Short Answer:&lt;/STRONG&gt;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;Autoscaling primarily depends on the number of pending tasks.&lt;/LI&gt;&lt;LI&gt;Workspaces on the Premium plan use optimized autoscaling, while those on the Standard plan use standard autoscaling.&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&lt;STRONG&gt;Long Answer:&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;Databricks autoscaling responds mainly to sustained backlogs of unscheduled tasks rather than CPU or memory usage alone. If the number of pending tasks consistently exceeds your current cluster capacity—meaning more tasks are queued than available cores can handle—Databricks will consider adding a new worker node.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Key Points:&lt;/STRONG&gt;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;STRONG&gt;Pending Tasks as the Main Trigger:&lt;/STRONG&gt; Autoscaling monitors how many tasks remain queued. Persistent queues indicate that existing workers can’t keep up, prompting additional workers.&lt;/LI&gt;&lt;LI&gt;&lt;STRONG&gt;Not Instantaneous, But Sustained Load:&lt;/STRONG&gt; Spark waits to confirm that the increased demand isn’t just a short-lived spike. 
Only after tasks remain pending for a threshold period does scaling occur.&lt;/LI&gt;&lt;LI&gt;&lt;STRONG&gt;Indirect Role of CPU/Memory Utilization:&lt;/STRONG&gt; While CPU/memory affect task completion speed, autoscaling decisions are based on task queues rather than these metrics directly.&lt;/LI&gt;&lt;LI&gt;&lt;STRONG&gt;Timing and Reaction:&lt;/STRONG&gt; Adding a new worker typically takes a minute or so, ensuring scaling responds to stable workload increases rather than momentary fluctuations.&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&lt;STRONG&gt;Useful Links:&lt;/STRONG&gt;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;A href="https://learn.microsoft.com/en-us/azure/databricks/compute/configure#autoscaling" target="_blank"&gt;https://learn.microsoft.com/en-us/azure/databricks/compute/configure#autoscaling&lt;/A&gt;&lt;/LI&gt;&lt;LI&gt;&lt;A href="https://www.databricks.com/blog/2018/05/02/introducing-databricks-optimized-auto-scaling.html" target="_blank"&gt;https://www.databricks.com/blog/2018/05/02/introducing-databricks-optimized-auto-scaling.html&lt;/A&gt;&lt;/LI&gt;&lt;LI&gt;&lt;A href="https://spark.apache.org/docs/latest/job-scheduling.html#dynamic-resource-allocation" target="_blank"&gt;https://spark.apache.org/docs/latest/job-scheduling.html#dynamic-resource-allocation&lt;/A&gt;&lt;/LI&gt;&lt;/UL&gt;</description>
      <pubDate>Mon, 16 Dec 2024 11:15:03 GMT</pubDate>
      <guid>https://community.databricks.com/t5/get-started-discussions/understanding-autoscaling-in-databricks-under-what-conditions/m-p/102213#M9138</guid>
      <dc:creator>filipniziol</dc:creator>
      <dc:date>2024-12-16T11:15:03Z</dc:date>
    </item>
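For comparison with the Databricks behavior summarized above, the closest open-source analogue is Spark's dynamic resource allocation. A hedged sketch of enabling it on a plain spark-submit job (the min/max values mirror the thread's 1-to-5 range; `app.py` is a placeholder):

```shell
# Sketch: open-source dynamic allocation, the OSS counterpart to
# Databricks autoscaling. All conf keys are standard Spark settings.
spark-submit \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=1 \
  --conf spark.dynamicAllocation.maxExecutors=5 \
  --conf spark.dynamicAllocation.schedulerBacklogTimeout=1s \
  app.py
```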
    <item>
      <title>Re: Understanding Autoscaling in Databricks: Under What Conditions Does Spark Add a New Worker Node?</title>
      <link>https://community.databricks.com/t5/get-started-discussions/understanding-autoscaling-in-databricks-under-what-conditions/m-p/102386#M9139</link>
      <description>&lt;P&gt;Hi &lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/117376"&gt;@filipniziol&lt;/a&gt;,&lt;/P&gt;&lt;P class=""&gt;Great summary, thanks!&lt;/P&gt;&lt;P class=""&gt;I’d be interested to know what the &lt;STRONG&gt;limits&lt;/STRONG&gt; are in more detail. For example:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;How many &lt;STRONG&gt;pending tasks&lt;/STRONG&gt; are needed to trigger a new worker or cluster?&lt;/LI&gt;&lt;LI&gt;How long does the &lt;STRONG&gt;CPU utilization&lt;/STRONG&gt; need to be above a certain threshold (e.g., &lt;STRONG&gt;&amp;gt; XX%&lt;/STRONG&gt;) before scaling occurs?&lt;/LI&gt;&lt;/UL&gt;&lt;P class=""&gt;Are there specific thresholds or configurable parameters that influence these decisions?&lt;/P&gt;&lt;P class=""&gt;Thanks again for the clarification!&lt;/P&gt;</description>
      <pubDate>Tue, 17 Dec 2024 13:38:16 GMT</pubDate>
      <guid>https://community.databricks.com/t5/get-started-discussions/understanding-autoscaling-in-databricks-under-what-conditions/m-p/102386#M9139</guid>
      <dc:creator>h_h_ak</dc:creator>
      <dc:date>2024-12-17T13:38:16Z</dc:date>
    </item>
    <item>
      <title>Re: Understanding Autoscaling in Databricks: Under What Conditions Does Spark Add a New Worker Node?</title>
      <link>https://community.databricks.com/t5/get-started-discussions/understanding-autoscaling-in-databricks-under-what-conditions/m-p/102458#M9140</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/118664"&gt;@h_h_ak&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;&lt;P&gt;Thank you for your follow-up questions!&lt;/P&gt;&lt;P&gt;While Databricks’ autoscaling implementation is proprietary and functions as a black box, we can gain a clearer understanding by examining Apache Spark’s open-source dynamic resource allocation mechanism.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Here are the files to investigate&lt;/STRONG&gt;:&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;ExecutorAllocationManager:&lt;/STRONG&gt; &lt;A href="https://github.com/apache/spark/blob/branch-3.5/core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala" target="_self"&gt;ExecutorAllocationManager.scala&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;ExecutorAllocationClient:&amp;nbsp;&lt;/STRONG&gt;&lt;A href="https://github.com/apache/spark/blob/branch-3.5/core/src/main/scala/org/apache/spark/ExecutorAllocationClient.scala" target="_self"&gt;ExecutorAllocationClient.scala&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;ExecutorMonitor:&lt;/STRONG&gt;&lt;A href="https://github.com/apache/spark/blob/branch-3.5/core/src/main/scala/org/apache/spark/scheduler/dynalloc/ExecutorMonitor.scala" target="_self"&gt;ExecutorMonitor.scala&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;1. Number of Pending Tasks as the Primary Trigger&lt;/STRONG&gt;&lt;BR /&gt;Dynamic resource allocation in Spark primarily relies on the number of pending tasks in the scheduler’s queue. If the number of tasks waiting to be assigned exceeds the current executor capacity (i.e., more tasks than available cores), Spark considers adding new executor nodes. This mechanism ensures that the cluster can handle increased workloads by scaling out when necessary.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;2. 
Scheduler Backlog Timeout (spark.dynamicAllocation.schedulerBacklogTimeout)&lt;/STRONG&gt;&lt;BR /&gt;The key configuration parameter here is spark.dynamicAllocation.schedulerBacklogTimeout.&amp;nbsp;This parameter defines how long Spark should wait while there is a sustained backlog of pending tasks before deciding to add new executors.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;3. Absence of Direct CPU Utilization Thresholds&lt;/STRONG&gt;&lt;BR /&gt;Spark’s dynamic allocation does not directly use CPU utilization metrics as triggers for scaling. Instead, it focuses on the task backlog and executor idle times. While high CPU usage can indirectly lead to a task backlog (since tasks may take longer to complete), there are no explicit CPU utilization thresholds that Spark monitors to decide on scaling actions.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Summary&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;Scaling Triggers:&lt;/STRONG&gt; Primarily based on a sustained backlog of pending tasks rather than direct CPU or memory utilization metrics.&lt;BR /&gt;&lt;STRONG&gt;Key Parameter:&lt;/STRONG&gt; spark.dynamicAllocation.schedulerBacklogTimeout defines how long Spark waits with a sustained backlog before scaling up.&lt;BR /&gt;&lt;STRONG&gt;Open-Source Insight:&lt;/STRONG&gt; While Databricks’ implementation may add proprietary enhancements, understanding Spark’s dynamic allocation provides a solid foundation for anticipating and configuring autoscaling behavior.&lt;/P&gt;&lt;P&gt;Hope it helps!&lt;/P&gt;</description>
      <pubDate>Wed, 18 Dec 2024 09:55:54 GMT</pubDate>
      <guid>https://community.databricks.com/t5/get-started-discussions/understanding-autoscaling-in-databricks-under-what-conditions/m-p/102458#M9140</guid>
      <dc:creator>filipniziol</dc:creator>
      <dc:date>2024-12-18T09:55:54Z</dc:date>
    </item>
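The backlog-driven behavior described above can be condensed into a toy model. This is a sketch of the open-source dynamic-allocation logic, not Databricks' proprietary autoscaler, and the threshold values are illustrative:

```python
import math

# Toy model of Spark's backlog-driven scale-up decision. The real knobs:
# spark.dynamicAllocation.schedulerBacklogTimeout (default 1s) gates the
# first executor request; sustainedSchedulerBacklogTimeout gates later ones.

def target_executors(pending_tasks, cores_per_executor):
    # Each core runs one task at a time, so the desired executor count
    # is the backlog divided by cores per executor, rounded up.
    return math.ceil(pending_tasks / cores_per_executor)

def should_scale_up(backlog_age_s, scheduler_backlog_timeout_s=1.0):
    # Short spikes are ignored: scale only once the backlog has
    # persisted for the full timeout.
    return backlog_age_s >= scheduler_backlog_timeout_s

# The scenario from the thread: 8-core workers, 20 queued tasks.
print(target_executors(20, 8))   # -> 3 executors desired
print(should_scale_up(0.2))      # -> False (momentary spike)
print(should_scale_up(5.0))      # -> True  (sustained backlog)
```

Note that CPU and memory never appear as inputs here, which matches the point above: they influence scaling only indirectly, by making tasks pile up.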
    <item>
      <title>Re: Understanding Autoscaling in Databricks: Under What Conditions Does Spark Add a New Worker Node?</title>
      <link>https://community.databricks.com/t5/get-started-discussions/understanding-autoscaling-in-databricks-under-what-conditions/m-p/111906#M9141</link>
      <description>&lt;P&gt;In case I use stateful functions for processing streaming data, like&amp;nbsp;&lt;/P&gt;&lt;DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;FlatMapGroupsWithStateFunction, and a scaling event happens and a new node is added, how do I replicate my state on the new node?&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;Does Databricks have any built-in solution for this?&lt;/SPAN&gt;&lt;/DIV&gt;&lt;/DIV&gt;</description>
      <pubDate>Thu, 06 Mar 2025 11:06:52 GMT</pubDate>
      <guid>https://community.databricks.com/t5/get-started-discussions/understanding-autoscaling-in-databricks-under-what-conditions/m-p/111906#M9141</guid>
      <dc:creator>Mike_at_MM</dc:creator>
      <dc:date>2025-03-06T11:06:52Z</dc:date>
    </item>
    <item>
      <title>Re: Understanding Autoscaling in Databricks: Under What Conditions Does Spark Add a New Worker Node?</title>
      <link>https://community.databricks.com/t5/get-started-discussions/understanding-autoscaling-in-databricks-under-what-conditions/m-p/148651#M11450</link>
      <description>&lt;P&gt;Is the above information true for job clusters as well? It looks like the enhanced autoscaler is only available for pipelines.&lt;/P&gt;</description>
      <pubDate>Tue, 17 Feb 2026 23:37:33 GMT</pubDate>
      <guid>https://community.databricks.com/t5/get-started-discussions/understanding-autoscaling-in-databricks-under-what-conditions/m-p/148651#M11450</guid>
      <dc:creator>aranjan99</dc:creator>
      <dc:date>2026-02-17T23:37:33Z</dc:date>
    </item>
  </channel>
</rss>