
Resource exhaustion when using default apply_changes python functionality

lena1
New Contributor

Hello!

We are currently setting up streaming CDC pipelines for more than 500 tables.

Due to the high number of tables, we split them across multiple DLT pipelines per layer: bronze, silver, and gold.

In silver, we only upsert, using the apply_changes function as described here: https://docs.databricks.com/en/delta-live-tables/python-ref.html#cdc
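For context, our silver definitions follow that documented pattern, roughly like the sketch below (the table, key, and sequencing column names are placeholders, not our real schema):

import dlt
from pyspark.sql.functions import col

# Placeholder names: "customers_silver", "customers_bronze", "customer_id", "event_ts".
dlt.create_streaming_table("customers_silver")

dlt.apply_changes(
    target="customers_silver",        # silver table receiving the upserts
    source="customers_bronze",        # streaming bronze source
    keys=["customer_id"],             # key used to match rows
    sequence_by=col("event_ts"),      # ordering column for out-of-order events
    stored_as_scd_type=1,             # plain upsert, no history
)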

However, when we run the silver pipelines, we see that both memory and the nodes in use become exhausted.

So, in order to reduce costs, we are wondering why this is the case and how to optimize it. This is a bit hard, as the code is not open source and we haven't found any reference yet on optimizing the apply_changes function.

Does anybody know what is happening in the background, why it takes so many resources, and how to optimize it?

Any help is welcome!

Thank you in advance! 

1 REPLY

Wojciech_BUK
Contributor III

Hi Lena1,
there is no magic behind the scenes.
If you readStream from the bronze table and writeStream with foreachBatch(function), where the function runs a MERGE statement, you will get similar performance.
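Conceptually it is close to something like this (the table names, key column, and checkpoint path below are only placeholders):

from delta.tables import DeltaTable

def upsert_to_silver(micro_batch_df, batch_id):
    # MERGE each micro-batch from bronze into the silver table.
    target = DeltaTable.forName(micro_batch_df.sparkSession, "silver.customers")  # placeholder table
    (
        target.alias("t")
        .merge(micro_batch_df.alias("s"), "t.customer_id = s.customer_id")  # placeholder key
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute()
    )

(
    spark.readStream.table("bronze.customers")                    # placeholder source
    .writeStream
    .foreachBatch(upsert_to_silver)
    .option("checkpointLocation", "/tmp/checkpoints/customers")   # placeholder path
    .trigger(availableNow=True)
    .start()
)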
Maybe there is a lot of shuffling happening, or the source tables are not optimized (a lot of small files); if it is small files, compacting the bronze tables can help, for example as in the sketch below.
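A minimal sketch, assuming the bronze source is a Delta table you can compact directly (table name is a placeholder):

# Compact small files in the bronze source (placeholder table name).
spark.sql("OPTIMIZE bronze.customers")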
This is hard to tell without looking at the data and metrics.
Wojciech 

