How to handle ETL for 100+ tables with Spark Structured Streaming?

Zair
New Contributor III

I am writing a streaming job that will perform ETL for more than 130 tables, and I would like to know whether there is a better way to structure this. The alternative I am considering is writing a separate streaming job for each table.

The source data arrives in real time as CDC events through Azure Event Hubs.
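
For context, here is a rough sketch of the single-job approach I am considering: read the Event Hubs feed once (via its Kafka-compatible endpoint) and fan out to per-table writes inside foreachBatch. The topic name, connection settings, the "table" field in the payload, and the bronze.* target tables are all placeholders for my actual setup:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, get_json_object

spark = SparkSession.builder.appName("cdc-fanout").getOrCreate()

# Read the CDC feed once. Event Hubs exposes a Kafka-compatible
# endpoint; the connection values below are placeholders (real use
# also needs SASL auth options).
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "<namespace>.servicebus.windows.net:9093")
       .option("subscribe", "cdc-topic")
       .load())

# Assumes each CDC event is a JSON payload carrying a "table" field
# identifying which of the 130+ tables it belongs to.
events = raw.select(
    col("value").cast("string").alias("json"),
    get_json_object(col("value").cast("string"), "$.table").alias("table"))

# Demultiplex inside foreachBatch: one streaming query, with one
# write per table per micro-batch, instead of 130 separate queries.
def upsert_tables(batch_df, batch_id):
    batch_df.persist()
    tables = [r["table"] for r in batch_df.select("table").distinct().collect()]
    for t in tables:
        (batch_df.filter(col("table") == t)
         .write.mode("append")
         .saveAsTable(f"bronze.{t}"))  # would become a MERGE for true CDC upserts
    batch_df.unpersist()

query = (events.writeStream
         .foreachBatch(upsert_tables)
         .option("checkpointLocation", "/chk/cdc-fanout")
         .start())
```

The appeal is a single checkpoint and a single query to operate, but one slow table write blocks the whole micro-batch, which is why I am also weighing one job per table.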