How to handle ETL for 100+ tables with Spark Structured Streaming?
08-06-2022 02:15 PM
I am writing a single streaming job that will perform ETL for more than 130 tables, and I would like to know whether there is a better way to do this. The alternative I am considering is writing a separate streaming job for each table.

The source data arrives in real time from CDC through Azure Event Hubs.
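For reference, my current single-job approach looks roughly like the sketch below: one stream reads from Event Hubs and a `foreachBatch` sink routes each table's CDC events to its own Delta table. This is only a simplified sketch, not my real setup; the envelope schema, table list, connection string, and checkpoint path are all placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType

spark = SparkSession.builder.appName("cdc-fanout").getOrCreate()

# Hypothetical CDC envelope: each event names its source table and
# carries the record as a JSON payload. Adjust to the real CDC format.
envelope_schema = StructType([
    StructField("table", StringType()),
    StructField("payload", StringType()),
])

# Placeholder list standing in for the 130+ source tables.
cdc_tables = ["customers", "orders"]

# Requires the azure-event-hubs-spark connector; on Databricks the
# connection string may need to be encrypted via EventHubsUtils.encrypt.
eh_conf = {"eventhubs.connectionString": "<connection-string>"}

raw = spark.readStream.format("eventhubs").options(**eh_conf).load()

# The connector exposes the event payload as a binary "body" column.
events = (
    raw.select(from_json(col("body").cast("string"), envelope_schema).alias("e"))
       .select("e.table", "e.payload")
)

def fan_out(batch_df, batch_id):
    """Demultiplex one micro-batch into a Delta table per source table."""
    batch_df.persist()  # avoid recomputing the batch once per table
    for t in cdc_tables:
        (batch_df.filter(col("table") == t)
                 .select("payload")
                 .write.format("delta")
                 .mode("append")
                 .saveAsTable(f"bronze.{t}"))
    batch_df.unpersist()

query = (
    events.writeStream
          .foreachBatch(fan_out)
          .option("checkpointLocation", "/tmp/checkpoints/cdc-fanout")  # placeholder
          .start()
)
```

The trade-off I am weighing: one job keeps a single stream, checkpoint, and cluster but serializes 130+ table writes inside every micro-batch, while per-table jobs isolate failures and scale independently at the cost of 130+ checkpoints and much more scheduling overhead.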