07-29-2024 03:09 AM
My task is to sync 300 tables from an on-prem SQL Server to Delta Lake.
I will load CDC data from Raw. The first step is to move the CDC data to bronze with Auto Loader. Then, using a Delta stream, I read the changes from bronze, apply simple data type changes, and merge the result into silver.
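In code, the flow looks roughly like this for one of the tables (this is just a sketch run in a Databricks notebook where spark is predefined; the paths, table names, columns and the Id key are placeholders):

```python
from delta.tables import DeltaTable

raw_path = "/mnt/raw/cdc/dbo/Customer"   # CDC files exported from SQL Server (placeholder)
bronze_table = "bronze.customer_cdc"
silver_table = "silver.customer"

# 1) Raw -> Bronze with Auto Loader
(spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "parquet")
    .option("cloudFiles.schemaLocation", "/mnt/bronze/_schemas/customer")
    .load(raw_path)
    .writeStream
    .option("checkpointLocation", "/mnt/bronze/_checkpoints/customer")
    .trigger(availableNow=True)
    .toTable(bronze_table))

# 2) Bronze -> Silver: stream the changes, cast types, merge each micro-batch
def merge_to_silver(batch_df, batch_id):
    # simple data type change (example column, placeholder)
    batch_df = batch_df.withColumn("Amount", batch_df["Amount"].cast("decimal(18,2)"))
    (DeltaTable.forName(spark, silver_table).alias("t")
        .merge(batch_df.alias("s"), "t.Id = s.Id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute())

(spark.readStream
    .table(bronze_table)
    .writeStream
    .option("checkpointLocation", "/mnt/silver/_checkpoints/customer")
    .trigger(availableNow=True)
    .foreachBatch(merge_to_silver)
    .start())
```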
Previously I had just 20 tables, so I made 20 notebooks. But with 300 tables, that seems like far too many notebooks.
I was thinking of making one notebook that loops through all the tables and skips the data type changes. That approach would not run in parallel in a Databricks job, right?
Is there any way to generate notebooks based on some metadata (table name, schema)?
Is it a good idea to run a job with 300 notebooks? The number of records from CDC will not be in the millions, but rather in the thousands.
Thanks.
Labels:
- Delta Lake
- Workflows
Accepted Solutions
07-29-2024 03:19 AM - edited 07-29-2024 03:20 AM
Hi @alesventus ,
You can apply a metadata/config-driven approach. Create a control table (or a JSON/YAML file) with all the information required for processing, such as:
- table name
- target table
- table primary keys
- transformations to apply
Then loop over each table and pass the parameters to a generic notebook.
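A minimal sketch of that pattern, assuming a hypothetical control table etl.control_tables, a generic notebook named generic_cdc_merge, and illustrative column/parameter names:

```python
# Read the control table and call the generic notebook once per source table.
config_rows = spark.table("etl.control_tables").collect()

for row in config_rows:
    dbutils.notebook.run(
        "generic_cdc_merge",   # path to the generic notebook (placeholder)
        3600,                  # timeout in seconds
        {
            "source_table": row["table_name"],
            "target_table": row["target_table"],
            "primary_keys": row["primary_keys"],        # e.g. "Id" or "Id,Region"
            "transformations": row["transformations"],  # e.g. JSON describing casts
        },
    )
```

Note that this loop processes the tables one after another; for parallelism you can wrap the calls in a thread pool or generate one job task per table from the same metadata, as the follow-up reply below does.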
08-06-2024 04:51 AM
I followed your advice, and it worked pretty well. I have a generic notebook that accepts the table name and primary keys as parameters.
In Databricks, with the CLI command databricks jobs create --json {...} I can create a job with hundreds of tasks in one shot. The skeleton of the job is very similar for each table, and I generate the tasks in Excel.
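Roughly, the generation looks like this (a sketch only; the table list, notebook path, and cluster id are placeholders):

```python
import json

# One entry per source table; this list can be exported from Excel.
tables = [
    {"table": "Customer", "keys": "Id"},
    {"table": "Order", "keys": "OrderId"},
    # ...
]

# Build a Jobs API 2.1 style definition with one notebook task per table.
job = {
    "name": "cdc_sync",
    "tasks": [
        {
            "task_key": f"sync_{t['table'].lower()}",
            "existing_cluster_id": "<cluster-id>",  # placeholder
            "notebook_task": {
                "notebook_path": "/Repos/etl/generic_cdc_merge",
                "base_parameters": {
                    "source_table": t["table"],
                    "primary_keys": t["keys"],
                },
            },
        }
        for t in tables
    ],
}

# Write the file, then create the job:
#   databricks jobs create --json @job.json
with open("job.json", "w") as f:
    json.dump(job, f, indent=2)
```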
Thanks for the idea.

