12-16-2024 12:29 AM
I'm currently working with Databricks autoscaling configurations and trying to better understand how Spark decides when to spin up additional worker nodes. My cluster has a minimum of one worker and can scale up to five. I know that tasks are assigned to cores and that if more tasks are queued than available cores, Spark may consider adding a new worker (assuming autoscaling is enabled). But what specific conditions or metrics does Spark use to trigger the autoscaling event?
For example:
- Is it based solely on the number of pending tasks in the scheduler queue?
- Does it consider task completion times, memory usage, or CPU utilization on existing workers?
- How quickly does autoscaling react once these conditions are met?
A practical scenario: If I have a single worker with 8 cores and I have more tasks than cores for a prolonged period, will Spark immediately add another worker or does it wait for some threshold of sustained load?
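To make that concrete, here is a rough sketch of the kind of job I mean, assuming a Scala notebook on that single 8-core worker (the numbers are just for illustration):

```scala
// Roughly 400 tasks queued against 8 cores, with enough work per task
// that the backlog stays around for a while instead of draining instantly.
val df = spark.range(0L, 4000000000L, 1L, numPartitions = 400)
df.selectExpr("sum(id % 7) AS s").show()
```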
I'd appreciate insights from anyone who has worked with Databricks autoscaling in production. Any reference to official documentation, or real-world examples of the conditions that must be met before a new worker is allocated, would be very helpful.
Accepted Solutions
12-18-2024 01:55 AM
Hi @h_h_ak ,
Thank you for your follow-up questions!
While Databricks' autoscaling implementation is proprietary and functions as a black box, we can gain a clearer understanding by examining Apache Spark's open-source dynamic resource allocation mechanism.
Here are the files to investigate:
- ExecutorAllocationManager: ExecutorAllocationManager.scala
- ExecutorAllocationClient: ExecutorAllocationClient.scala
- ExecutorMonitor: ExecutorMonitor.scala
1. Number of Pending Tasks as the Primary Trigger
Dynamic resource allocation in Spark primarily relies on the number of pending tasks in the scheduler's queue. If the number of tasks waiting to be assigned exceeds the current executor capacity (i.e., more tasks than available cores), Spark considers requesting new executors. This mechanism ensures that the cluster can handle increased workloads by scaling out when necessary.
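As a rough illustration of that capacity check, the open-source ExecutorAllocationManager sizes its executor target roughly like this. This is a simplified sketch of the OSS logic only, not the Databricks implementation:

```scala
// Simplified sketch of the sizing logic in Spark's ExecutorAllocationManager.
// Databricks' autoscaler is proprietary and may differ; this shows the OSS idea only.
def maxExecutorsNeeded(pendingTasks: Int,
                       runningTasks: Int,
                       coresPerExecutor: Int,
                       cpusPerTask: Int = 1): Int = {
  val tasksPerExecutor = coresPerExecutor / cpusPerTask
  // Ceiling division: enough executors to run all pending + running tasks at once.
  (pendingTasks + runningTasks + tasksPerExecutor - 1) / tasksPerExecutor
}

// Example: 100 pending + 8 running tasks on 8-core executors => 14 executors.
println(maxExecutorsNeeded(pendingTasks = 100, runningTasks = 8, coresPerExecutor = 8))
```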
2. Scheduler Backlog Timeout (spark.dynamicAllocation.schedulerBacklogTimeout)
The key configuration parameter here is spark.dynamicAllocation.schedulerBacklogTimeout. It defines how long Spark should wait while there is a sustained backlog of pending tasks before deciding to add new executors.
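For reference, these are the related OSS knobs. The values below are only an illustrative sketch; on Databricks you normally configure min/max workers on the cluster rather than setting these keys yourself:

```scala
import org.apache.spark.sql.SparkSession

// Illustrative values only; in open-source Spark the default schedulerBacklogTimeout is 1s,
// and sustainedSchedulerBacklogTimeout defaults to the same value.
val spark = SparkSession.builder()
  .appName("dynamic-allocation-sketch")
  .config("spark.dynamicAllocation.enabled", "true")
  .config("spark.dynamicAllocation.minExecutors", "1")
  .config("spark.dynamicAllocation.maxExecutors", "5")
  // First executor request fires after this much sustained backlog...
  .config("spark.dynamicAllocation.schedulerBacklogTimeout", "1s")
  // ...and further (exponentially growing) requests follow at this interval while the backlog persists.
  .config("spark.dynamicAllocation.sustainedSchedulerBacklogTimeout", "1s")
  // Lets executors be released without an external shuffle service (Spark 3.x).
  .config("spark.dynamicAllocation.shuffleTracking.enabled", "true")
  .getOrCreate()
```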
3. Absence of Direct CPU Utilization Thresholds
Spark's dynamic allocation does not directly use CPU utilization metrics as triggers for scaling. Instead, it focuses on task backlog and executor idle times. While high CPU usage can indirectly lead to a task backlog (since tasks may take longer to complete), there are no explicit CPU utilization thresholds that Spark monitors to decide on scaling actions.
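If you want to see what is actually in effect on a given cluster, something like this works in a notebook. It is just a sketch; on Databricks some keys may be unset because autoscaling is managed outside these properties:

```scala
// Print the dynamic-allocation keys that govern scale-up (backlog) and scale-down (idle time).
val keys = Seq(
  "spark.dynamicAllocation.enabled",
  "spark.dynamicAllocation.schedulerBacklogTimeout",
  "spark.dynamicAllocation.sustainedSchedulerBacklogTimeout",
  "spark.dynamicAllocation.executorIdleTimeout",       // scale-down: release executors idle this long
  "spark.dynamicAllocation.cachedExecutorIdleTimeout"  // scale-down: executors holding cached data
)
keys.foreach(k => println(s"$k = ${spark.conf.getOption(k).getOrElse("<not set>")}"))
```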
Summary
- Scaling Triggers: Primarily based on a sustained backlog of pending tasks rather than direct CPU or memory utilization metrics.
- Key Parameter: spark.dynamicAllocation.schedulerBacklogTimeout defines how long Spark waits with a sustained backlog before scaling up.
- Open-Source Insight: While Databricks' implementation may add proprietary enhancements, understanding Spark's dynamic allocation provides a solid foundation for anticipating and configuring autoscaling behavior.
Hope it helps
12-16-2024 03:10 AM
Databricks autoscaling is designed to dynamically adjust the number of worker nodes in a cluster based on workload demand, optimizing resource utilization and cost. Understanding the conditions under which Spark triggers autoscaling requires insight into how Databricks monitors and interprets the workload.
Abhishek Upadhyay
12-16-2024 03:15 AM
Hi @h_h_ak ,
Short Answer:
- Autoscaling primarily depends on the number of pending tasks.
- Workspaces on the Premium plan use optimized autoscaling, while those on the Standard plan use standard autoscaling.
Long Answer:
Databricks autoscaling responds mainly to sustained backlogs of unscheduled tasks rather than CPU or memory usage alone. If the number of pending tasks consistently exceeds your current cluster capacity (more tasks are queued than the available cores can handle), Databricks will consider adding a new worker node.
Key Points:
- Pending Tasks as the Main Trigger: Autoscaling monitors how many tasks remain queued. Persistent queues indicate that existing workers can't keep up, prompting additional workers (see the monitoring sketch after this list).
- Not Instantaneous, But Sustained Load: Spark waits to confirm that the increased demand isn't just a short-lived spike. Only after tasks remain pending for a threshold period does scaling occur.
- Indirect Role of CPU/Memory Utilization: While CPU/memory affect task completion speed, autoscaling decisions are based on task queues rather than these metrics directly.
- Timing and Reaction: Adding a new worker typically takes a minute or so, ensuring scaling responds to stable workload increases rather than momentary fluctuations.
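If you want to watch the backlog that drives these decisions yourself, a small SparkListener is enough. This is only a monitoring sketch (not a Databricks API), assuming a Scala notebook where `spark` is the active session:

```scala
import org.apache.spark.scheduler._
import scala.collection.concurrent.TrieMap

// Logs how many tasks are still outstanding per active stage; a sustained non-zero
// backlog here is what autoscaling reacts to.
class BacklogLogger extends SparkListener {
  private val remaining = TrieMap.empty[Int, Int] // stageId -> tasks not yet finished

  override def onStageSubmitted(ev: SparkListenerStageSubmitted): Unit =
    remaining.put(ev.stageInfo.stageId, ev.stageInfo.numTasks)

  override def onTaskEnd(ev: SparkListenerTaskEnd): Unit = {
    remaining.get(ev.stageId).foreach(n => remaining.put(ev.stageId, n - 1))
    println(s"outstanding tasks: ${remaining.values.sum}")
  }

  override def onStageCompleted(ev: SparkListenerStageCompleted): Unit =
    remaining.remove(ev.stageInfo.stageId)
}

spark.sparkContext.addSparkListener(new BacklogLogger)
```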
12-17-2024 05:38 AM
Hi @filipniziol,
Great summary, thanks!
I'd be interested to know the limits in more detail. For example:
- How many pending tasks are needed to trigger a new worker or cluster?
- How long does the CPU utilization need to be above a certain threshold (e.g., > XX%) before scaling occurs?
Are there specific thresholds or configurable parameters that influence these decisions?
Thanks again for the clarification!
3 weeks ago
In case I use stateful functions for processing the streaming data, like

