artsheiko
Databricks Employee

Hi, to answer your question it would help to know more about what you're trying to achieve and the bottleneck you're currently hitting.

That said, handling the processing of 130 tables in one monolith could be challenging: business rules may change in the future, and one day the required processing frequency may diverge as well (for example, you may find that some of the data can be processed in batch mode instead).
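To make that concrete, one common approach is to keep a small catalog of per-table processing metadata and derive independent pipeline groups from it, so each group can be scheduled and evolved separately. This is only a minimal sketch: the table names, the `mode` field, and the `group_by_mode` helper are all hypothetical, not something from your setup.

```python
# Hypothetical metadata describing how each table should be processed.
# Table names and fields are illustrative only.
tables = {
    "orders": {"mode": "streaming"},
    "customers": {"mode": "batch", "schedule": "daily"},
    "clickstream": {"mode": "streaming"},
    "inventory": {"mode": "batch", "schedule": "hourly"},
}

def group_by_mode(tables):
    """Split the table catalog into independent pipeline groups so each
    group can be owned, scheduled, and changed without touching the rest."""
    groups = {}
    for name, cfg in tables.items():
        groups.setdefault(cfg["mode"], []).append(name)
    # Sort for deterministic output.
    return {mode: sorted(names) for mode, names in groups.items()}

print(group_by_mode(tables))
```

Each resulting group could then become its own job (or set of jobs), which also makes it easy to change the frequency of one group later without redeploying the others.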

It is also worth considering the problem from the team's point of view: if everything runs inside the same streaming job, you will most likely be unable to distribute development and support of this processing across several team members working in parallel.