Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Serverless compute vs Job cluster

vaibhavaher2025
New Contributor

Hi guys,
For running jobs with varying workloads, what should I use: serverless compute or a job cluster? What are the pros and cons?

(I'll be running my notebook from Azure Data Factory)

2 REPLIES 2

MariuszK
Contributor III

Hi,
If you use PySpark for data processing, Job compute is the better fit.
Pros:
- lower cost
- support for PySpark
- flexible configuration
Cons:
- slower startup time

Serverless SQL Warehouse:
Pros:
- faster startup time
- dedicated to SQL
- less management overhead
Cons:
- no PySpark support
- more expensive per compute unit (Photon acceleration)
- less customization

KaranamS
Contributor III

It depends on the cost, performance, and startup time your use case needs.

Serverless compute is usually the preferred choice because of its fast startup time and dynamic scaling. However, if your workload is long-running and predictable, job compute with autoscaling might be more cost-effective.
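For the long-running, predictable case, the job cluster with autoscaling can be declared in the job definition (or in the cluster spec of the ADF Databricks Notebook activity's linked service). A minimal sketch, where the `spark_version`, `node_type_id`, and worker counts are illustrative values you would tune for your own workload:

```json
{
  "new_cluster": {
    "spark_version": "15.4.x-scala2.12",
    "node_type_id": "Standard_DS3_v2",
    "autoscale": {
      "min_workers": 2,
      "max_workers": 8
    }
  }
}
```

With `autoscale`, Databricks adds or removes workers between `min_workers` and `max_workers` as load varies, which is where the cost saving over a fixed-size cluster comes from.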
