Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Job cluster configuration for 24/7

Valued Contributor

Hi Team,

We intend to run the job cluster 24/7.

We are considering the following cost-related parameters:

- Data volumes

- Client SLA for job completion

- Starting with a small cluster configuration

Please advise on any other options we should take into account.


Community Manager

Hi @Phani1, when configuring a job cluster for 24/7 operation, it’s essential to consider cost, performance, and scalability.

Here are some recommendations based on your specified parameters:

  1. Data Volumes:

    • Analyze your data volumes carefully. If you have large data volumes, consider using a high-performance cluster with sufficient resources to handle the load efficiently.
    • Opt for autoscaling if your data volumes fluctuate throughout the day. Autoscaling dynamically adjusts the cluster size based on workload demands.
  2. Client SLA for Job Completion:

    • Understand your client’s Service Level Agreement (SLA) requirements. If strict SLAs are in place, prioritize reliability and responsiveness.
    • Consider using high-availability clusters to minimize downtime. These clusters automatically recover from failures and maintain consistent performance.
  3. Starting with a Small Cluster Configuration:

    • Starting with a small cluster is a prudent approach. It allows you to assess workload requirements without committing to excessive costs upfront.
    • Monitor cluster utilization and performance. As demand grows, scale up by moving to larger instance types or scale out by adding more worker nodes.
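As a sketch of points 1–3, an autoscaling job cluster can be declared in the job's `new_cluster` specification so the cluster grows under load and shrinks when idle. The runtime version, node type, and worker counts below are illustrative assumptions, not recommendations for your workload:

```python
# Illustrative job cluster spec for the Databricks Jobs API (all values are assumptions).
# Starting small (min_workers) with a capped autoscale range balances cost against SLAs.
new_cluster = {
    "spark_version": "14.3.x-scala2.12",  # example LTS runtime
    "node_type_id": "i3.xlarge",          # example node type; choose per workload
    "autoscale": {
        "min_workers": 2,                 # small baseline to control steady-state cost
        "max_workers": 8,                 # cap to bound spend under peak data volumes
    },
}

# This spec would typically be attached to a job task, e.g. via the Databricks SDK:
# from databricks.sdk import WorkspaceClient
# w = WorkspaceClient()
# w.jobs.create(name="24x7-job", tasks=[...])  # pass new_cluster inside the task
```

With autoscaling enabled, you only pay for the minimum worker count during quiet periods, while the cap keeps peak-hour costs predictable.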

Additional Considerations:

  • Cost Control: Regularly review your cluster usage and adjust resources as needed. Use cluster policies to enforce cost controls and allocate resources effectively.
  • Instance Types: Choose instance types based on the workload. For compute-intensive tasks, use compute-optimized instances; for memory-intensive tasks, memory-optimized instances; for storage-heavy workloads, storage-optimized instances.
  • Scheduled Scaling: Consider scheduling cluster scaling based on workload patterns. Scale up during peak hours and down during off-peak times.
  • Spot Instances: If cost savings are critical, explore using spot instances (if supported by your platform). Spot instances are cheaper but can be preempted by the cloud provider.
  • Monitoring and Alerts: Set up monitoring and alerts to track cluster performance, resource utilization, and cost. Address anomalies promptly.
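To make the spot-instance and cost-control points concrete, here is a minimal sketch for an AWS-hosted workspace: workers run on spot capacity with on-demand fallback, and a small helper enforces a budgeted worker cap before a job is submitted. The budget ceiling and the helper function are hypothetical, introduced here only for illustration:

```python
# Hypothetical AWS attributes for a job cluster (values are assumptions):
# spot workers with on-demand fallback, driver kept on stable on-demand capacity.
aws_attributes = {
    "availability": "SPOT_WITH_FALLBACK",  # use spot; fall back to on-demand if preempted
    "first_on_demand": 1,                  # keep the first node (driver) on-demand
    "spot_bid_price_percent": 100,         # bid up to the on-demand price
}

# A simple cost guardrail in the spirit of cluster policies: reject cluster specs
# whose autoscale cap exceeds a budgeted maximum worker count.
MAX_WORKERS_BUDGET = 16  # assumed budget ceiling


def check_worker_cap(autoscale: dict, budget: int = MAX_WORKERS_BUDGET) -> bool:
    """Return True if the autoscale cap stays within the budgeted worker count."""
    return autoscale.get("max_workers", 0) <= budget
```

A check like this could run in CI or in a deployment script so that an over-sized configuration is caught before it ever incurs cost; in practice, workspace-level cluster policies enforce the same idea natively.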

Remember that the optimal configuration depends on your specific use case, workload, and budget. Regularly assess and fine-tune your cluster settings to strike the right balance between cost and performance.
