Hello @ManojkMohan ,
If you try to run a DLT pipeline on, for example, an all-purpose compute cluster, it will fail. DLT pipelines require a DLT job compute cluster.
To run it, create a new pipeline in the Databricks UI, assign your DLT notebook to it, and start the pipeline. If you prefer to script this instead of clicking through the UI, see the sketch below.
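Here is a minimal sketch of the same steps via the Databricks REST API (`POST /api/2.0/pipelines` to create, then `POST /api/2.0/pipelines/{id}/updates` to start). The host, token, notebook path, and schema name are placeholders you would replace with your own values:

```python
# Sketch: create and start a DLT pipeline via the Databricks REST API.
# HOST, TOKEN, the notebook path, and the target schema are placeholders.
import requests

HOST = "https://<your-workspace>.cloud.databricks.com"  # placeholder
TOKEN = "<personal-access-token>"                        # placeholder
headers = {"Authorization": f"Bearer {TOKEN}"}

# Create the pipeline, pointing it at the notebook containing your DLT code.
create_resp = requests.post(
    f"{HOST}/api/2.0/pipelines",
    headers=headers,
    json={
        "name": "my_dlt_pipeline",
        "libraries": [{"notebook": {"path": "/Users/you@example.com/dlt_notebook"}}],
        "target": "my_schema",  # schema where the pipeline publishes its tables
        "continuous": False,    # triggered mode; set True for continuous
    },
)
create_resp.raise_for_status()
pipeline_id = create_resp.json()["pipeline_id"]

# Start an update -- the equivalent of clicking "Start" in the UI.
# DLT provisions its own job compute for this run.
start_resp = requests.post(
    f"{HOST}/api/2.0/pipelines/{pipeline_id}/updates", headers=headers
)
start_resp.raise_for_status()
```

Either way, DLT provisions and manages the job compute for you; you never attach the notebook to an existing cluster.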

The reason it does not work on all-purpose compute is that the `dlt` module is not supported on Spark Connect clusters. The DLT runtime runs on a specialized job compute environment that differs from the standard all-purpose runtime and provides capabilities and configurations tailored specifically for Delta Live Tables. Because of these differences, a DLT pipeline cannot be executed on an all-purpose cluster; it must run on a DLT job cluster.
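To make that concrete, here is a minimal sketch of a DLT notebook (the source path is a Databricks sample dataset; `spark` is the session Databricks provides in notebooks). Run on an all-purpose cluster, the `import dlt` line itself fails because the module only exists in the DLT runtime; run as part of a pipeline, it defines a managed table:

```python
# Minimal DLT notebook sketch -- only runs inside a DLT pipeline.
import dlt
from pyspark.sql.functions import col

@dlt.table(comment="Example table built from a Databricks sample dataset.")
def cleaned_events():
    # DLT materializes the returned DataFrame as a managed table.
    return (
        spark.read.format("json")
        .load("/databricks-datasets/structured-streaming/events/")
        .filter(col("action").isNotNull())
    )
```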
Hope that helps. Let me know if you have any other questions!
Best, Ilir