06-23-2021 02:28 PM
I see that Delta Lake has an OPTIMIZE command and also table properties for Auto Optimize. What are the differences between these and when should I use one over the other?
06-23-2021 02:36 PM
OPTIMIZE is a SQL command that can be run on a schedule or ad hoc. It compacts small files into larger files. You can also supply a predicate so the command runs only on a subset of a table, and you can specify columns to Z-ORDER on.
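A sketch of the three variants described above (the table and column names are illustrative, not from the original post):

```sql
-- Compact all small files in the table
OPTIMIZE events;

-- Compact only a subset of the table via a partition predicate
OPTIMIZE events WHERE date >= '2021-06-01';

-- Compact and co-locate rows by commonly filtered columns
OPTIMIZE events ZORDER BY (eventType, userId);
```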
Auto Optimize is a pair of table properties: Optimized Writes and Auto Compaction. Once enabled on a table, every write to that table honors them. Optimized Writes reduces the number of files produced per write, while Auto Compaction runs a lighter-weight version of the OPTIMIZE command after each write, bin-packing partitions that have accumulated many small files (50 small files per partition at the time of writing).
If you want to Z-ORDER a table, you must use the OPTIMIZE SQL command. Otherwise, you can follow the Auto Optimize guidelines in the Databricks documentation.
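For reference, the two Auto Optimize parts map to the Delta table properties below; they can be set per table or defaulted for new tables via Spark conf (the table name is illustrative):

```sql
-- Enable both parts of Auto Optimize on an existing table
ALTER TABLE events SET TBLPROPERTIES (
  'delta.autoOptimize.optimizeWrite' = 'true',
  'delta.autoOptimize.autoCompact'   = 'true'
);

-- Or default them for tables created in the current session
SET spark.databricks.delta.properties.defaults.autoOptimize.optimizeWrite = true;
SET spark.databricks.delta.properties.defaults.autoOptimize.autoCompact = true;
```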
03-13-2025 10:08 AM
Is this still a valid answer in 2025? https://docs.databricks.com/aws/en/delta/tune-file-size#auto-compaction-for-delta-lake-on-databricks
12-04-2025 07:26 PM
Yes, it is if you are using external tables. If you are using Unity Catalog managed tables, you should just use Predictive Optimization.
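Predictive Optimization is enabled at the catalog, schema, or table level rather than through table properties; a sketch based on the Databricks SQL syntax as I recall it (catalog and schema names are illustrative):

```sql
-- Let Databricks schedule OPTIMIZE/VACUUM automatically for all
-- Unity Catalog managed tables in this schema
ALTER SCHEMA main.analytics ENABLE PREDICTIVE OPTIMIZATION;

-- Inspect the effective setting
DESCRIBE SCHEMA EXTENDED main.analytics;
```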
12-05-2025 09:59 AM
Auto Optimize = automatically reduces small files during writes. Best for ongoing ETL.
OPTIMIZE = manual compaction + Z-ORDER for improving performance on existing data.
They are complementary, not competing.
Most teams use Auto Optimize for daily ingestion and still run scheduled OPTIMIZE jobs weekly or monthly for Z-ORDER and deep compaction.
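The combined pattern above might look like this in a scheduled maintenance job: Auto Optimize handles ingestion continuously, while a periodic job does deep compaction and Z-ORDER on recent partitions (table, partition column, and 30-day window are all illustrative assumptions):

```sql
-- Weekly job: deep compaction + Z-ORDER on recently written partitions
OPTIMIZE events
WHERE date >= date_sub(current_date(), 30)
ZORDER BY (userId);
```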