
The Evolution of Data Engineering with Serverless Compute

Tushar_Parekar
Databricks Employee

Serverless compute now powers Notebooks, Lakeflow Jobs, and Spark Declarative Pipelines (SDP) on Databricks, taking care of infrastructure so data teams can focus on building and running workloads instead of managing clusters.

Key highlights

  • No cluster management – Networking, sizing, security hardening, and runtime upgrades are handled automatically for notebooks, jobs, and SDP.
  • Auto-improving performance and cost – Over the last year, serverless workloads have become ~80% faster and up to 70% more cost-efficient without any user changes.
  • More reliable runs – Automatic scaling and failover across instances and regions have delivered 89% more successful runs compared to classic clusters.
  • Versionless upgrades – Serverless has applied 25 DBR upgrades across 4.5B+ workloads with a 99.998% success rate, continuously rolling out performance and security improvements behind the scenes.
  • Performance modes for jobs and pipelines – Performance-optimized mode starts in seconds and typically runs about 2x faster, while Standard mode can cut job costs by up to 70% for batch workloads.
  • Built-in cost governance – Unified billing, budget policies, and intelligent timeouts make it easier to see, control, and attribute serverless spend across teams and projects.
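The performance modes above come down to a single setting on the job definition. Here is a minimal sketch of a Jobs API create-job payload, assuming the API's `performance_target` field for serverless jobs; the exact field name, accepted values, and endpoint version may differ in your workspace, so check the Jobs API reference before relying on this:

```python
# Minimal sketch of a serverless job definition with an explicit performance
# mode. Job name, notebook path, and field values are illustrative assumptions.
job_payload = {
    "name": "nightly-batch-etl",       # hypothetical job name
    "performance_target": "STANDARD",  # cost-optimized; "PERFORMANCE_OPTIMIZED" for faster starts/runs
    "tasks": [
        {
            "task_key": "etl",
            "notebook_task": {"notebook_path": "/Workspace/etl/nightly"},  # hypothetical path
            # No cluster spec here: omitting compute settings is what places
            # the task on serverless compute, where Databricks manages sizing,
            # networking, and runtime upgrades automatically.
        }
    ],
}

# In practice this payload would be POSTed to the Jobs API (e.g.
# /api/2.2/jobs/create) with a workspace token; only the shape is shown here.
print(job_payload["performance_target"])
```

Switching a batch job between the two modes is then a one-field change rather than a cluster reconfiguration, which is what makes the cost/speed trade-off easy to tune per workload.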

In the full post, you’ll see concrete examples of how teams are using serverless compute to cut costs, speed up pipelines, and reduce operational noise.

🔗 Read the full post here 👈
