Hello!
We are currently setting up streaming CDC pipelines for more than 500 tables.
Because of the high table count, we split the tables across multiple DLT pipelines, with several pipelines per layer: bronze, silver, and gold.
In silver, we only upsert, using the apply_changes function as described here: https://docs.databricks.com/en/delta-live-tables/python-ref.html#cdc
A simplified sketch of our silver definitions is below.
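This is just an illustrative sketch of the pattern we follow per table; the table names, key column, and sequencing column are placeholders, not our real schema:

```python
import dlt
from pyspark.sql.functions import col

# Placeholder target table in the silver layer
dlt.create_streaming_table("silver_customers")

# Upsert CDC rows from bronze into silver (SCD type 1)
dlt.apply_changes(
    target="silver_customers",
    source="bronze_customers",      # bronze CDC feed (placeholder name)
    keys=["customer_id"],           # primary key used for matching
    sequence_by=col("event_ts"),    # ordering column for late/out-of-order events
    stored_as_scd_type=1,
)
```

We generate one such target/apply_changes pair per table in a loop, so a single silver pipeline ends up with many of these flows.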
However, when we run the silver pipelines, we see memory exhaustion and very heavy load on the worker nodes.
To reduce costs, we are trying to understand why this happens and how to optimize it. This is difficult because the apply_changes implementation is not open source, and we haven't found any reference material on tuning it.
Does anybody know what happens in the background, why it consumes so many resources, and how to optimize it?
Any help is welcome!
Thank you in advance!