Hello,
We have a scenario in Databricks where every day we receive 60-70 million records and merge them into a table that already holds 28 billion records, and the merge takes a very long time. The time spent rewriting the affected files is the bottleneck: merge time is not proportional to the number of incoming records but depends almost entirely on the number of files Delta has to rewrite. The table is partitioned on Period, each partition holds around 800 million records, and the incoming records span all 36 partitions across 3 years, sometimes reaching back to 2020.
Please note this is a one-to-one table from the source, with no transformation logic at all.
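For reference, the merge is roughly of this shape (table, key, and column names below are placeholders, not our actual schema):

```sql
MERGE INTO big_table AS t
USING daily_increment AS s
  ON  t.business_key = s.business_key
  AND t.Period = s.Period   -- partition column in the join condition, so Delta can prune partitions
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *;
```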
We have tried all the usual Spark settings, OPTIMIZE on the table, Z-Ordering, and a big cluster with Photon (E16), but it still takes a long time to rewrite the updated files.
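For completeness, the maintenance we ran was roughly this (the Z-Order column is a placeholder standing in for our merge key):

```sql
OPTIMIZE big_table
ZORDER BY (business_key);
```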
Can anyone suggest something, or share what worked if you have faced a similar situation and improved the performance?
Table Size is 1.4 TB
Columns - 563
Partitioned by Period
Time taken to merge and rewrite files - over 10 hours to update ~3,000 files, and the files themselves are not that large.
Storage - Azure Data Lake Storage Gen2, Parquet format
Type of Table - Delta
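To illustrate why file rewrites dominate, here is a back-of-envelope calculation from the numbers above (all figures approximate): the daily increment is a tiny fraction of the table, yet because it touches files in every partition, the merge rewrites far more data than the logical change size.

```python
# Approximate figures taken from the stats listed above
table_bytes = 1.4e12     # ~1.4 TB on disk
total_records = 28e9     # ~28 billion records in the table
daily_records = 65e6     # ~60-70 million incoming records per day

# Average compressed footprint per record
bytes_per_record = table_bytes / total_records
print(f"~{bytes_per_record:.0f} bytes per record on disk (compressed)")

# The daily increment is a very small fraction of the table,
# but it is scattered across all 36 partitions, so many files are touched.
change_fraction = daily_records / total_records
print(f"daily change fraction: {change_fraction:.2%}")
```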
If someone could help, it would be great!