I am creating a Data Pipeline as shown below.

1. Files from multiple input sources arrive in their respective folders in the bronze layer (an example layout is shown after this list).
2. I am using Databricks to perform the transformations and load the transformed data to Azure SQL, and also to the ADLS Gen2 silver layer (not shown in the figure).
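For context, the bronze layout looks roughly like this (the container, storage account, and folder names are just placeholders):

```text
abfss://bronze@<storage_account>.dfs.core.windows.net/
    sales/          <- files from source system 1
    customers/      <- files from source system 2
    inventory/      <- files from source system 3
```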
How can I write PySpark code that reads and transforms multiple folders and multiple files, driven by a metadata table? A sketch of what I have in mind is below.
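This is a minimal sketch of the idea, assuming a metadata table (I am calling it `etl_metadata`; the table name, column names, paths, and JDBC details are all made up by me for illustration) that stores, per source, the input folder, file format, target table, and an active flag:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical metadata table: one row per source folder in the bronze layer.
# Assumed columns: source_name, input_path, file_format, target_table, is_active
metadata_rows = spark.table("etl_metadata").filter("is_active = true").collect()

for row in metadata_rows:
    # Read every file in the source folder (Spark reads all files under the path)
    df = (spark.read
          .format(row.file_format)          # e.g. "csv", "parquet", "json"
          .option("header", "true")
          .load(row.input_path))

    # Placeholder for the per-source transformation logic
    transformed_df = df  # TODO: apply transformations for row.source_name

    # Write to the ADLS Gen2 silver layer (path is illustrative)
    (transformed_df.write
        .mode("overwrite")
        .format("delta")
        .save(f"abfss://silver@<storage_account>.dfs.core.windows.net/{row.source_name}"))

    # Write to Azure SQL via the Spark JDBC connector (connection details are placeholders)
    (transformed_df.write
        .mode("append")
        .format("jdbc")
        .option("url", "jdbc:sqlserver://<server>.database.windows.net:1433;database=<db>")
        .option("dbtable", row.target_table)
        .option("user", "<user>")
        .option("password", "<password>")
        .save())
```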
I want to control the execution of the code through the metadata table; is there any other way to parameterize it? (One option I have seen is notebook widgets, as sketched below.)
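For example, Databricks notebook widgets (or job/ADF parameters passed into the notebook) could narrow down which metadata rows get processed; the widget names here are just examples:

```python
# Read parameters passed to the notebook (e.g. from an ADF pipeline or a Databricks job);
# dbutils is available by default in Databricks notebooks
dbutils.widgets.text("source_name", "", "Source to process (blank = all)")
dbutils.widgets.text("load_date", "", "Business date to load")

source_name = dbutils.widgets.get("source_name")
load_date = dbutils.widgets.get("load_date")

# The widget values can then be used to filter the metadata table
metadata_df = spark.table("etl_metadata").filter("is_active = true")
if source_name:
    metadata_df = metadata_df.filter(metadata_df.source_name == source_name)
```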
Also, would it be possible to do schema validation with the metadata table approach? (A rough idea of what I mean is sketched below.)
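What I was thinking is to store the expected schema as a JSON string in the metadata table and compare it against the incoming files. This snippet would sit inside the per-source loop from the earlier sketch (so `row` is the current metadata row), and the `expected_schema` column is an assumption on my part:

```python
import json

from pyspark.sql.types import StructType

# Assume the metadata row carries the expected schema as a JSON string,
# produced earlier with df.schema.json()
expected_schema = StructType.fromJson(json.loads(row.expected_schema))

# Read the folder with the expected schema enforced; non-conforming rows can be
# captured with badRecordsPath (a Databricks-specific option) for investigation
df = (spark.read
      .format("csv")
      .schema(expected_schema)
      .option("header", "true")
      .option("badRecordsPath",
              f"abfss://bronze@<storage_account>.dfs.core.windows.net/_bad_records/{row.source_name}")
      .load(row.input_path))

# Simple structural check: compare actual vs expected column names
missing_cols = set(expected_schema.fieldNames()) - set(df.columns)
if missing_cols:
    raise ValueError(f"Schema validation failed for {row.source_name}: missing {missing_cols}")
```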
Please help.
Pardon me if it sounds unrealistic.
Thanks a lot