Don't treat DLT and LDP (Lakeflow Declarative Pipelines) as the same thing: although behind the scenes they work very similarly, the UI and developer experience have changed immensely and some very important new features have been added. I used DLT extensively and in a very dynamic way, where the tables to process came from an ever-changing metadata file. Let's say I ingested all 68 tables from AdventureWorks: if any table was removed from the metadata, it wasn't just skipped by DLT, it was dropped entirely. That whole approach of the DLT pipeline having ownership of the created objects was a showstopper for us, and we regularly asked Databricks to prioritise changing this behaviour.
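To make that pattern concrete, here is a minimal sketch of the kind of metadata-driven table generation I mean. The table list, landing paths and format options are hypothetical stand-ins, not our actual setup; the point is only that the set of tables is driven by external metadata, so a table vanishing from that metadata used to mean DLT dropping the object it had created.

```python
import dlt

# Hypothetical list of source tables; in our case this came from an
# ever-changing metadata file rather than being hard-coded.
tables_to_ingest = ["SalesLT_Customer", "SalesLT_Product"]

def make_bronze_table(source_name: str):
    # Factory function so each generated table captures its own source_name.
    @dlt.table(name=f"bronze_{source_name}")
    def bronze():
        # `spark` is provided by the pipeline runtime; the landing path
        # and file format below are assumptions for illustration.
        return (
            spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", "parquet")
            .load(f"/mnt/landing/{source_name}")
        )
    return bronze

for table_name in tables_to_ingest:
    make_bronze_table(table_name)
```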
To their credit, they admitted there was a better way, and though it took time to change, the LDP version addresses this by separating flows from objects - see more details here: https://learn.microsoft.com/en-us/azure/databricks/dlt/concepts#key-concepts:
"A flow reads data from a source, applies user-defined processing logic, and writes the result into a target."
So far I only know this in theory, as I haven't had the chance to give it another go since the announcement.
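For what it's worth, here is a minimal, untested sketch of how I understand the flow/target separation to look in the Python API, using dlt.create_streaming_table together with @dlt.append_flow; the table name, flow name and landing path are made up for illustration.

```python
import dlt

# The target table is declared once, independently of any particular flow.
dlt.create_streaming_table("bronze_customer")

# A flow writes into that target; flows can be added or removed over time
# without the target declaration disappearing with them.
@dlt.append_flow(target="bronze_customer", name="customer_from_landing")
def customer_from_landing():
    # Hypothetical Auto Loader source; adjust path and format to your layout.
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/mnt/landing/customer")
    )
```

As I read the docs, removing a flow stops new data arriving but no longer implies removing the object it wrote to, which is exactly the ownership problem described above.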