Merge Operation is very slow for S/4 Table ACDOCA
10-19-2023 08:10 PM
Hello,
We have a scenario in Databricks where every day we receive 60-70 million records that must be merged into a target table already holding 28 billion records, and the merge takes a very long time. The time spent rewriting the affected files is excessive. Merge time is not proportional to the number of incoming delta records; it depends almost entirely on the number of files the merge has to rewrite. The table is partitioned on Period, each period holds around 800 million records, and the daily delta spans roughly 3 years of data, i.e. all 36 partitions, and can sometimes reach back to 2020 as well.
Please note this is a one-to-one copy of the source table, with no transformation logic at all.
We have tried various Spark settings, OPTIMIZE on the table, Z-ordering, and a big cluster with Photon (E16), but it still takes a lot of time to rewrite the updated files.
Can anyone suggest something, or share how they improved performance in a similar situation?
Table Size is 1.4 TB
Columns - 563
Partitioned by Period
Time taken to merge and rewrite files - over 10 hours to update ~3,000 files, and the files themselves are not that large in size.
Storage - Azure Blob Gen 2 in Parquet format
Type of Table - Delta
If someone could help, it would be great 🙂
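One common lever in this situation is to make the MERGE condition itself prune partitions: Delta can skip target files when the ON clause contains literal predicates on the partition column, so listing the Period values present in today's batch keeps unaffected partitions out of the scan and rewrite entirely. Below is a minimal sketch of building such a condition; the table and column names (`acdoca`, `Period`, `doc_key`) are illustrative assumptions, not from the original post, and the Spark/Delta usage is shown only in comments.

```python
# Sketch: restrict a Delta MERGE to only the Period partitions present in
# the incoming batch, so untouched partitions are never scanned or rewritten.
# Table/column names (acdoca, Period, doc_key) are illustrative assumptions.

def build_merge_condition(periods, key_col="doc_key", part_col="Period"):
    """Build a MERGE ON clause with an explicit partition filter.

    Delta prunes target files using literal predicates on the partition
    column, so enumerating the affected periods avoids a full-table scan.
    """
    in_list = ", ".join(f"'{p}'" for p in sorted(set(periods)))
    return f"t.{part_col} IN ({in_list}) AND t.{key_col} = s.{key_col}"

# With Spark and delta available, this would be used roughly as follows
# (not executed here):
#
#   periods = [r[0] for r in updates.select("Period").distinct().collect()]
#   (DeltaTable.forName(spark, "acdoca").alias("t")
#        .merge(updates.alias("s"), build_merge_condition(periods))
#        .whenMatchedUpdateAll()
#        .whenNotMatchedInsertAll()
#        .execute())

print(build_merge_condition(["2023001", "2023001", "2022012"]))
# -> t.Period IN ('2022012', '2023001') AND t.doc_key = s.doc_key
```

Since the post says the delta touches all 36 partitions anyway, this alone may not be enough, but it at least caps the scan to 3 years instead of the whole table.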
12-14-2023 03:17 PM
Hi @Kishan1003, did you find anything helpful? I'm dealing with a similar situation: the ACDOCA table on my side is around 300M records (fairly smaller), and the incoming daily data is usually around 1M. I have tried partitioning by period (the fiscyearper column), Z-ordering, and dynamic pruning. So far the best merge time has been around 1 hour. I want to understand whether I can achieve better performance before scaling up.
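Both posts observe that merge time tracks the number of files rewritten rather than the number of rows updated, which is why Z-ordering on the merge key matters: it clusters matching rows into fewer files. The toy model below (pure Python, illustrative numbers only, not measurements from either table) shows the effect: the same 50 updated keys touch 1 file when keys are clustered but all 10 files when keys are scattered.

```python
# Toy model: MERGE cost follows files rewritten, not rows updated.
# A file must be rewritten if it contains at least one updated key.

def files_touched(update_keys, files):
    """Count files containing at least one updated key.
    `files` is a list of per-file key sets."""
    return sum(1 for f in files if any(k in f for k in update_keys))

# 10 files x 100 keys each, keys 0..999, clustered (like Z-order on the key).
clustered = [set(range(i * 100, (i + 1) * 100)) for i in range(10)]
# Same 1,000 keys scattered round-robin across the 10 files (no clustering).
scattered = [set(range(i, 1000, 10)) for i in range(10)]

updates = set(range(0, 50))  # 50 updated keys in a contiguous range
print(files_touched(updates, clustered))  # -> 1  (one file rewritten)
print(files_touched(updates, scattered))  # -> 10 (every file rewritten)
```

The practical takeaway is to check, after OPTIMIZE ZORDER BY on the join key, whether the merge's matched-file count actually drops; if incoming keys are inherently scattered (e.g. updates to arbitrary old documents), clustering can only help so much.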

