I'll try to answer this in the simplest possible way.
1. Spark is an imperative programming framework: you tell it what to do, and it does it. DLT is declarative - you describe what you want the datasets to be (i.e. the transforms), and it takes care of the rest, including orchestrating updates, inserts and merges in the right order (see the sketch after this list). It's actually quite similar to dbt, if you are familiar with that tool.
2. DLT runs on fully managed infrastructure, whereas with Spark you have to configure and manage the compute yourself. As a result, DLT is typically much cheaper on a price/performance basis.
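
To make the imperative vs. declarative contrast concrete, here's a rough sketch. The table names (`raw_orders`, `clean_orders`) are made up, and the DLT part assumes it runs inside a Databricks pipeline where the `dlt` module and the `spark` session are available:

```python
from pyspark.sql.functions import col

# Imperative Spark: you spell out each step and run/schedule it yourself.
raw = spark.read.table("raw_orders")
clean = raw.where(col("status") == "complete")
clean.write.mode("overwrite").saveAsTable("clean_orders")

# Declarative DLT: you only define what the dataset should be;
# the pipeline works out ordering, infrastructure, and retries.
import dlt

@dlt.table(comment="Completed orders only")
def clean_orders():
    return dlt.read("raw_orders").where(col("status") == "complete")
```

Notice the DLT version never says when or how to write anything - the function name defines the target table, and the pipeline handles the rest, which is what makes it declarative (and dbt-like).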
So, in summary, DLT is a fully managed, declarative ETL framework where Databricks takes care of all the infrastructure. I generally recommend starting with it for ETL projects.