Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Merge Operation is very slow for S/4 Table ACDOCA

Kishan1003
New Contributor

Hello,

We have a scenario in Databricks where we receive 60-70 million records every day, and it takes a very long time to merge them into a target table that already holds 28 billion records. The time spent rewriting the affected files is the problem: merge time is not proportional to the number of incoming records, but depends mainly on the number of files the merge has to rewrite. The table is partitioned on Period, each period holds around 800 million records, and the incoming delta touches basically all 36 partitions across the last 3 years, sometimes reaching back to 2020.

Please note this is a one-to-one copy of the source table, with no transformation logic at all.

We have tried various Spark settings, OPTIMIZE on the table, Z-Ordering, and a big cluster with Photon (E16), but it still takes a lot of time to rewrite the updated files.

Can anyone suggest something, or has anyone done something similar before and improved the performance?

Table Size is 1.4 TB

Columns - 563

Partitioned by Period

Time taken to merge and rewrite files - over 10 hours to update 3,000 files, and the files are not particularly large.

Storage - Azure Data Lake Storage Gen2, Parquet format

Type of Table  - Delta
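
For context, here is a minimal sketch of a merge of this shape, with an explicit Period filter added to the ON clause so Delta can prune untouched partitions. The table, column, and key names below are placeholders, not our exact schema:

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

# On Databricks, `spark` is predefined; this keeps the snippet self-contained.
spark = SparkSession.getActiveSession()

# Placeholder names: "staging.acdoca_updates" stands in for the daily 60-70M
# record batch, "finance.acdoca" for the 28B record target, "Period" for the
# partition column, and "doc_key" for the merge key.
updates = spark.read.table("staging.acdoca_updates")

# Restrict the merge to the partitions the batch actually touches, so Delta
# only scans and rewrites files in those periods.
periods = [row["Period"] for row in updates.select("Period").distinct().collect()]
period_list = ", ".join(f"'{p}'" for p in periods)

(DeltaTable.forName(spark, "finance.acdoca").alias("t")
    .merge(
        updates.alias("s"),
        # Static partition filter on the target plus the logical join key.
        f"t.Period IN ({period_list}) AND t.Period = s.Period AND t.doc_key = s.doc_key")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())
```

Since the daily delta touches basically every partition, the IN-list alone mostly limits scanning; the real question is how to reduce the number of files that have to be rewritten inside each partition.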

If someone could help, that would be great 🙂

 

1 REPLY

177991
New Contributor II

Hi @Kishan1003, did you find anything helpful? I'm dealing with a similar situation: the acdoca table on my side is around 300M records (fairly smaller), and the incoming daily data is usually around 1M. I have tried partitioning by period (the fiscyearper column), Z-Ordering, and dynamic pruning. So far the best merge time has been around 1 hour. I want to understand whether I can achieve better performance before scaling up.
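
For what it's worth, this is the kind of setup I'm experimenting with; treat it as a sketch rather than a recipe. The table and column names (acdoca, fiscyearper, belnr) are from my environment, and the deletion-vector property requires a recent Databricks Runtime:

```python
from pyspark.sql import SparkSession

# `spark` is predefined on Databricks; this keeps the snippet self-contained.
spark = SparkSession.getActiveSession()

# 1) Deletion vectors: matched rows get marked as deleted instead of forcing
#    a full rewrite of every touched Parquet file during the merge.
spark.sql("""
    ALTER TABLE acdoca
    SET TBLPROPERTIES ('delta.enableDeletionVectors' = 'true')
""")

# 2) Z-order inside each fiscyearper partition on the merge key, so the join
#    condition can skip most files within the touched partitions.
spark.sql("OPTIMIZE acdoca ZORDER BY (belnr)")
```

The idea is that updates then stop paying the full file-rewrite cost, which looks like the same bottleneck described above.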
