Hello @noorbasha534,
That's a very interesting topic regarding fine-tuning file sizes in Delta tables.
Answering your questions:
1)
I use spark.databricks.delta.optimize.maxFileSize to set the maximum file size for the OPTIMIZE command, and it works just fine for me in most cases. OPTIMIZE creates a new version of the Delta table; in my scenario the resulting files end up close to the maximum limit. Please remember it is a maximum limit, not the desired size of the Parquet files. The final Parquet file size depends on other factors such as data distribution across partitions, clustering keys, and whether there is sufficient data in your table.
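A minimal sketch of what I mean (the table name silver.events is just a placeholder, and 256 MB is an example limit, not a recommendation):

```python
# Placeholder table name; adjust to your environment.
# Cap the file size OPTIMIZE is allowed to produce (example: 256 MB, expressed in bytes).
spark.conf.set("spark.databricks.delta.optimize.maxFileSize", 256 * 1024 * 1024)

# Compact small files up to the limit above; this writes a new version of the Delta table.
spark.sql("OPTIMIZE silver.events")
```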
2)
Larger files (approaching ~1 GB) can impact MERGE performance because:
a) MERGE operations need to rewrite entire files when any record in that file is affected
b) Larger files mean more data movement even for small changes
c) The overhead increases with file size, especially for selective updates
I would recommend enabling the delta.tuneFileSizesForRewrites table property for your silver layer; see the sketch below.
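A short sketch of how you could enable it, again with silver.events as a placeholder table name:

```python
# Placeholder table name; adjust to your environment.
# Hint to Delta that this table is rewritten frequently (e.g. by MERGE),
# so it targets smaller files and cheaper rewrites.
spark.sql("""
    ALTER TABLE silver.events
    SET TBLPROPERTIES ('delta.tuneFileSizesForRewrites' = 'true')
""")
```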
3)
I would say YES, go for a larger maxFileSize limit, but I believe it should not be larger than 1 GB.
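If you want to check how close you actually get to that limit, a rough check of the average Parquet file size after OPTIMIZE could look like this (silver.events is again a placeholder):

```python
# Placeholder table name; adjust to your environment.
# DESCRIBE DETAIL reports numFiles and sizeInBytes for the current table version.
detail = spark.sql("DESCRIBE DETAIL silver.events").collect()[0]
avg_mb = detail["sizeInBytes"] / detail["numFiles"] / (1024 * 1024)
print(f"{detail['numFiles']} files, ~{avg_mb:.0f} MB average file size")
```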
As an additional resource, I would recommend taking a look at this: delta tune file size
Best,
Radek.