@Kuldeep Chitrakar
First of all - instead of running notebooks one by one through MasterRawNotebook, you could use Workflows -> Jobs (or any other scheduler, e.g. Airflow, ADF) to run them in parallel and save some time.
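If you'd rather keep the orchestration inside a single driver notebook, here's a minimal sketch of running child notebooks in parallel with a thread pool - the notebook paths are hypothetical, and `dbutils` is only available inside the Databricks notebook runtime:

```python
# Run several ingestion notebooks concurrently from one driver notebook.
# NOTE: the paths below are hypothetical examples; dbutils is provided by
# the Databricks notebook runtime, not an importable library.
from concurrent.futures import ThreadPoolExecutor

notebook_paths = [
    "/RawToBronze/IngestCustomers",  # hypothetical
    "/RawToBronze/IngestOrders",     # hypothetical
]

def run_notebook(path):
    # Blocks until the child notebook finishes; 3600 s timeout.
    return dbutils.notebook.run(path, 3600)

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_notebook, notebook_paths))
```

That said, Workflows -> Jobs gives you retries, alerting and a monitoring UI for free, so for anything production-grade I'd still lean on it rather than hand-rolled threading.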
Creating notebooks for each table - for loading Raw to Bronze it's possible to create one generic notebook that will do the work for you (it depends on the raw filetype, but with e.g. Parquet it's doable). Write your code as generic as you can. That said, doing one notebook per table is also fine.
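As an illustration, a generic Raw-to-Bronze notebook could take the table name as a parameter. This is only a sketch under assumptions - the mount points and the `table_name` widget are made up, and `spark`/`dbutils` come from the Databricks runtime:

```python
# Generic Raw -> Bronze ingestion, parameterised by table name.
# NOTE: the /mnt/raw and /mnt/bronze mount points and the "table_name"
# widget are hypothetical examples.
dbutils.widgets.text("table_name", "")
table_name = dbutils.widgets.get("table_name")

raw_path = f"/mnt/raw/{table_name}"
bronze_path = f"/mnt/bronze/{table_name}"

# Read the raw Parquet files and land them as a Delta table in Bronze.
df = spark.read.parquet(raw_path)
(df.write
   .format("delta")
   .mode("overwrite")
   .save(bronze_path))
```

One job can then call this single notebook once per table, passing a different `table_name` each time.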
Folder structure - you need to find your own way of doing things 🙂
Here's what I'm using (it may differ from project to project):
- Config (Folder) - notebooks that handle configuration, such as authenticating with external databases/tools, mounting storage, etc.
- RawToBronze (Folder) - notebooks ingesting data from Raw to Bronze
- BronzeToSilver (Folder) - notebooks transforming data from Bronze to Silver
- SilverToGold (Folder) - notebooks transforming data from Silver to Gold
- GoldToXxx (Folder) - notebook that handles data transfer between the Lakehouse and any other tool that we're using (e.g. Synapse or SQL Database)
- Lib.py (File) - notebook that keeps all custom-made functions/classes (see the sketch below)
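The idea behind Lib.py is simple: shared helpers live in one place and every pipeline notebook pulls them in. The function below is a hypothetical example of what might go in there, not from a real project:

```python
# Lib.py -- shared helpers used across all layers.
# NOTE: add_ingestion_metadata is a hypothetical example helper.
from pyspark.sql import DataFrame, functions as F

def add_ingestion_metadata(df: DataFrame) -> DataFrame:
    """Append a load-timestamp column so every table records when it was written."""
    return df.withColumn("_loaded_at", F.current_timestamp())
```

In any other notebook you can then do `%run ./Lib` (when it's kept as a notebook) and call `add_ingestion_metadata(df)` before writing.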