Thank you for your answer @Lakshay 🙂
I am aware of the "full refresh" option, but I hadn't considered it because I didn't think it could solve all my issues at once. I did expect it to handle updates and additions (e.g. column changes), provided it overwrites all the tables and metadata already in place and reprocesses every file in the cloud_files() source directory.
My remaining doubt is that this solution may not cover my potential need to completely delete some of the tables along with their related metadata, unless "full refresh" means that removing a table's definition from my pipeline code also removes that table and its metadata from the target directory and schema.
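For concreteness, the kind of table definition I mean is a cloud_files()-based streaming table like the sketch below (the table name, path, and format are hypothetical, just for illustration). My question is what happens to the target table and its metadata once a block like this is deleted from the pipeline source and a full refresh is run:

```sql
-- Hypothetical DLT table definition; name, path, and format are illustrative.
CREATE OR REFRESH STREAMING LIVE TABLE raw_events
AS SELECT *
FROM cloud_files("/mnt/landing/events/", "json");
```

If deleting this definition and triggering a full refresh also drops the table from the target schema, that would address the deletion scenario too.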