Hi everyone! I'm new to Databricks and taking my first steps with Delta Live Tables, so please forgive my inexperience. I'm building my first DLT pipeline and there's one thing I can't really grasp: how to clear all the objects generated or updated by a pipeline run (tables and metadata). I'll probably need to make changes and additions over time as my understanding of the subject progresses, and I'd like to be able to rerun the pipeline from scratch and reprocess all the data (I'm simulating the data stream and triggering the data inflow myself).
I understand (correct me if I'm wrong) that streaming live tables avoid reprocessing files from the cloud_files() source by keeping track of the files that have already been processed. While I believe a simple DROP TABLE would take care of the data tables themselves, I can't figure out how to get back to a completely clean slate given all the extra state the pipeline stores when it runs.
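For context, here is a minimal sketch of the kind of streaming live table I'm defining, written in Python (the table name, file format, and landing path below are just placeholders for my actual setup; in SQL the same source would be cloud_files()):

```python
import dlt

# Minimal sketch of one of my streaming live tables (names and paths are placeholders).
# `spark` is provided by the DLT runtime; the cloudFiles (Auto Loader) source is what
# I believe keeps track of the files that have already been processed.
@dlt.table(comment="Raw events ingested from the simulated stream")
def raw_events():
    return (
        spark.readStream.format("cloudFiles")      # Auto Loader streaming source
        .option("cloudFiles.format", "json")       # my simulated files are JSON
        .load("/mnt/landing/events/")              # directory where I drop the simulated files
    )
```

What I'd like to understand is what I need to drop or reset, beyond a DROP TABLE on raw_events and the downstream tables, so that rerunning the pipeline picks up every file in that directory again as if it had never run.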
Thanks for your help 🙂