Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.
Job cluster's CPU utilization goes above 100% a few times during the workload run

New Contributor

I have a Data Engineering pipeline workload that runs on Databricks.

The job cluster has the following configuration:

Worker: i3.4xlarge with 122 GB memory and 16 cores

Driver: i3.4xlarge with 122 GB memory and 16 cores

Min workers: 4, Max workers: 8


We noticed that CPU utilization occasionally spikes above 100%.

Can someone help me understand the following questions:

1. Are these high CPU utilization spikes problematic?

2. Is there any way to check the DBX job cluster logs to see CPU utilization?

3. What is the max limit for CPU utilization, and how does this whole thing work?


Community Manager

Hi @DBX-2024,

Let’s break down your questions:

  1. High CPU Utilization Spikes: Are They Problematic?

    • High CPU utilization spikes can be problematic depending on the context. Here are some considerations:
      • Normal Behavior: It’s common for CPU utilization to spike during resource-intensive tasks or when processing large amounts of data. If these spikes occur occasionally and don’t impact overall performance, they might be within acceptable limits.
      • Impact on Other Workloads: If the spikes affect other workloads running on the same cluster (e.g., causing delays or resource contention), they could be problematic.
      • Resource Starvation: Consistently high CPU utilization may lead to resource starvation (e.g., insufficient resources for other tasks).
      • Monitoring and Thresholds: Consider setting thresholds for acceptable CPU utilization and monitoring the cluster regularly.
      • Tuning and Optimization: Investigate whether specific jobs or tasks are causing the spikes and optimize them if needed.
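As a rough sketch of the threshold-based monitoring idea above, you could sample overall CPU utilization from a notebook cell on the driver. This is a minimal illustration, not a Databricks API: it reads the Linux `/proc/stat` counters directly (available on Databricks nodes, which run Linux), and the threshold value is just an example you should tune to your workload.

```python
import time

CPU_THRESHOLD = 90.0  # percent; an example value -- tune to your workload

def read_cpu_times():
    """Return (busy, total) jiffies from the aggregate 'cpu' line of /proc/stat."""
    with open("/proc/stat") as f:
        fields = [float(x) for x in f.readline().split()[1:]]
    idle = fields[3] + fields[4]  # idle + iowait columns
    total = sum(fields)
    return total - idle, total

def cpu_percent(interval=1.0):
    """Overall CPU utilization over `interval` seconds, averaged across all cores."""
    busy1, total1 = read_cpu_times()
    time.sleep(interval)
    busy2, total2 = read_cpu_times()
    dt = total2 - total1
    return 100.0 * (busy2 - busy1) / dt if dt else 0.0

pct = cpu_percent(0.5)
print(f"CPU: {pct:.1f}%" + ("  <-- above threshold!" if pct > CPU_THRESHOLD else ""))
```

Running this periodically (or in a background thread) during a job gives you a simple in-notebook record of utilization alongside the cluster metrics UI.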
  2. Checking DBX Job Cluster Logs for CPU Utilization:

    • Databricks provides logs and metrics for monitoring cluster performance. To check CPU utilization:
      • Cluster Metrics: In the Databricks workspace, navigate to the cluster details page. Look for metrics related to CPU usage.
      • Driver and Worker Logs: Check the logs for any warnings or errors related to CPU utilization. You can access these logs via the Databricks UI or programmatically using APIs.
      • Spark UI: When a job runs, the Spark UI provides detailed information about resource usage, including CPU. You can access it by clicking on the job ID in the Databricks Jobs tab.
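For the programmatic route, a hedged sketch using the Clusters API 2.0 `clusters/events` endpoint is below. This endpoint returns cluster lifecycle events (resizing, autoscaling, restarts), not raw CPU samples, so it complements rather than replaces the metrics UI. The workspace URL, token, and cluster ID are placeholders you must substitute.

```python
import requests  # assumes network access to your workspace and a valid personal access token

def fetch_cluster_events(host, token, cluster_id, limit=25):
    """Return recent lifecycle events (autoscaling, resizing, restarts, ...)
    for a cluster via the Clusters API 2.0 `clusters/events` endpoint."""
    resp = requests.post(
        f"{host}/api/2.0/clusters/events",
        headers={"Authorization": f"Bearer {token}"},
        json={"cluster_id": cluster_id, "limit": limit},
    )
    resp.raise_for_status()
    return resp.json().get("events", [])

# Placeholders -- substitute your own workspace URL, PAT, and cluster ID:
# for e in fetch_cluster_events("https://<workspace>.cloud.databricks.com",
#                               "<token>", "<cluster-id>"):
#     print(e["timestamp"], e["type"])
```

Autoscaling events here are useful for correlating CPU spikes with cluster resizes: a spike just before an `UPSIZE_COMPLETED` event usually means autoscaling was doing its job.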
  3. Max Limit for CPU Utilization and How It Works:

    • There isn’t a fixed “max limit” for CPU utilization; it depends on the cluster configuration, workload, and available resources. A reading above 100% usually reflects how the monitoring tool reports it: some tools sum per-core usage (so a 16-core node can show up to 1600%), while others report load relative to a single core or to the core count.
    • Databricks dynamically allocates resources based on the workload. If a task needs more CPU, it gets it (up to the cluster’s capacity).
    • Autoscaling: Databricks can automatically scale the cluster (adding or removing workers) based on demand. This helps handle spikes efficiently.
    • Concurrency: Consider the number of concurrent tasks. If too many tasks compete for CPU, utilization may spike.
    • Resource Management: Databricks manages resources (CPU, memory) across jobs, notebooks, and tasks to optimize performance.
    • User-Defined Limits: You can set limits (e.g., max workers, auto-termination) to control resource allocation.

Remember that context matters, and what’s considered “high” utilization depends on your specific use case. Regular monitoring, tuning, and understanding your workload patterns will help you manage CPU utilization effectively. 😊

If you need further assistance or have more questions, feel free to ask! 🚀
