Data Engineering

What's the difference between a Spark pipeline and a Delta Live Tables (DLT) pipeline? In which scenarios should we use a DLT pipeline over a Spark pipeline?

Srikanth_Gupta_
Databricks Employee
2 REPLIES

rgite
New Contributor II

I have the same question. Can someone shed some light on this?

BilalAslamDbrx
Databricks Employee

I'll try to answer this in the simplest possible way 🙂

1. Spark is an imperative programming framework: you tell it what to do, and it does it. DLT is declarative: you describe what you want the datasets to be (i.e., the transforms), and it takes care of the rest, including orchestrating updates, inserts, and merges in the right order (see the sketch after this list). It's actually quite similar to dbt, if you are familiar with that tool.

2. DLT runs on fully managed infrastructure, whereas with Spark you have to configure and manage the compute yourself. As a result, DLT is typically much cheaper on a price/performance basis.
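
To make the imperative-vs-declarative contrast concrete, here is a minimal sketch. It assumes a Databricks notebook where `spark` is already defined, plus a hypothetical JSON source at `/data/raw/events` and a hypothetical target table `clean_events`; those names are placeholders, not from the original thread.

```python
# --- Imperative Spark: you spell out each step and manage the write yourself ---
from pyspark.sql import functions as F

raw_df = spark.read.format("json").load("/data/raw/events")    # hypothetical source path
clean_df = raw_df.filter(F.col("event_id").isNotNull())        # drop rows with no event_id

(clean_df.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("clean_events"))                               # hypothetical target table

# --- Declarative DLT: you define what the dataset should be; the pipeline handles the rest ---
import dlt

@dlt.table(comment="Cleaned events, kept up to date by the DLT pipeline")
@dlt.expect_or_drop("valid_event_id", "event_id IS NOT NULL")   # declarative data quality rule
def clean_events():
    return spark.read.format("json").load("/data/raw/events")
```

In the Spark version you decide when and how the table gets written; in the DLT version you only declare the dataset and an expectation, and the pipeline decides how to materialize and refresh it when it runs.
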

 

So, in summary, DLT is a fully managed, declarative ETL framework where Databricks takes care of all the infrastructure. I generally recommend starting with it for ETL projects.
