Lakeflow Spark Declarative Pipelines now decouples pipeline and tables lifecycle

szymon_dybczak
Esteemed Contributor III

Ever deleted a pipeline… and accidentally wiped out the data with it?

Databricks just introduced a beta feature that lets you decouple pipelines from the tables they manage.
Lakeflow Spark Declarative Pipelines were designed with a data-as-code approach: a pipeline defines its tables declaratively, so deleting a pipeline also deletes its associated Materialized Views, Streaming Tables, and Views. This is useful for customers following CI/CD best practices.
The "data-as-code" approach works great for strict CI/CD setups, but real-world scenarios often need more flexibility.
That's why you can now delete a pipeline without deleting its data, using a simple parameter: cascade=false

This means:

• Your Materialized Views, Streaming Tables, and Views stay intact
• Data remains fully queryable
• You can reattach tables to a pipeline anytime and resume processing
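As a rough sketch of what this could look like in practice: the feature is exposed through a cascade=false parameter when deleting a pipeline. The endpoint path and parameter placement below are assumptions based on the standard Pipelines REST API, since this is a beta feature; only the cascade=false name comes from the announcement, so check the official API docs for the exact contract.

```python
from urllib.parse import urlencode
from urllib.request import Request


def build_delete_request(host: str, token: str, pipeline_id: str,
                         cascade: bool = False) -> Request:
    """Build (but do not send) a DELETE request for a pipeline.

    cascade=False asks Databricks to keep the pipeline's Materialized
    Views, Streaming Tables, and Views when the pipeline is removed.
    NOTE: the endpoint path and query-parameter placement here are
    assumptions for illustration; consult the Pipelines API reference.
    """
    query = urlencode({"cascade": str(cascade).lower()})
    return Request(
        url=f"{host}/api/2.0/pipelines/{pipeline_id}?{query}",
        method="DELETE",
        headers={"Authorization": f"Bearer {token}"},
    )


# Inspect the request that would be sent (hypothetical host/token/id):
req = build_delete_request(
    "https://example.cloud.databricks.com", "dapi-XXXX", "1234-abcd"
)
print(req.get_method(), req.full_url)
```

Building the request without sending it makes the cascade flag easy to see; in a real workflow you would pass the prepared request to urllib.request.urlopen (or use the Databricks SDK/CLI instead).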

This is a huge step toward decoupling compute from the data lifecycle, something many teams have been asking for as adoption grows beyond pure CI/CD use cases.

It's available for Unity Catalog pipelines using the default publishing mode, and definitely worth exploring if you're working with modern data platforms.

 
