Ever deleted a pipeline… and accidentally wiped out the data with it?
Databricks just introduced a beta feature that lets you decouple pipelines from the tables they manage.
Lakeflow Spark Declarative Pipelines were designed with a data-as-code approach: a pipeline defines its tables declaratively, so deleting a pipeline also deletes its associated Materialized Views, Streaming Tables, and Views. This is useful for customers following CI/CD best practices.
The "data-as-code" approach worked great for strict CI/CD setups - but real-world scenarios often need more flexibility.
That's why you can now delete a pipeline without deleting its data - just set a simple parameter: cascade=false
This means:
• Your Materialized Views, Streaming Tables, and Views stay intact
• Data remains fully queryable
• You can reattach tables to a pipeline anytime and resume processing
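As a rough sketch of what this looks like in practice: the Databricks Pipelines REST API exposes a delete endpoint per pipeline, and the new behavior hinges on the cascade flag. The snippet below only builds the request URL - the exact name and placement of the `cascade` parameter, and the example host and pipeline ID, are assumptions, so check the Pipelines API reference before relying on it.

```python
# Hedged sketch: deleting a pipeline while keeping its tables.
# ASSUMPTION: `cascade` is passed as a query parameter on the
# DELETE /api/2.0/pipelines/{pipeline_id} endpoint - verify against
# the official Databricks Pipelines API docs.
import urllib.parse


def build_delete_url(host: str, pipeline_id: str, cascade: bool = False) -> str:
    """Build the DELETE URL for a pipeline; cascade=False keeps the data."""
    query = urllib.parse.urlencode({"cascade": str(cascade).lower()})
    return f"{host}/api/2.0/pipelines/{pipeline_id}?{query}"


# Hypothetical workspace host and pipeline ID, for illustration only.
url = build_delete_url("https://example.cloud.databricks.com", "1234-abcd")
print(url)
# → https://example.cloud.databricks.com/api/2.0/pipelines/1234-abcd?cascade=false
```

You would then issue an authenticated DELETE request against that URL; with cascade=false the pipeline object goes away while its Materialized Views and Streaming Tables remain queryable.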
This is a huge step toward decoupling compute from the data lifecycle - something many teams have been asking for as adoption grows beyond pure CI/CD use cases.
It's available for Unity Catalog pipelines using the default publishing mode - and definitely worth exploring if you're working with modern data platforms.
