Hi @Shiva3,
Maybe you can try this: Delta Lake tables in Unity Catalog may have optimized writes enabled by default, which can reduce the number of output files by automatically coalescing partitions during writes.
# Disable auto-compaction and optimized writes
spark.conf.set("spark.databricks.delta.autoCompact.enabled", "false")
spark.conf.set("spark.databricks.delta.optimizeWrite.enabled", "false")
Setting both configurations to false ensures that Delta Lake doesn't automatically combine files or reduce partitions, so df.repartition(8) produces 8 distinct files. You can restore the settings afterwards.
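Here is a minimal sketch of the full flow, assuming your DataFrame is called df and the target is a Unity Catalog table; the name main.demo.my_table is just a placeholder for your own table:

# Remember the current settings so we can restore them afterwards
prev_compact = spark.conf.get("spark.databricks.delta.autoCompact.enabled", "true")
prev_optimize = spark.conf.get("spark.databricks.delta.optimizeWrite.enabled", "true")

# Disable both write optimizations for this session
spark.conf.set("spark.databricks.delta.autoCompact.enabled", "false")
spark.conf.set("spark.databricks.delta.optimizeWrite.enabled", "false")

try:
    # With the optimizations off, the 8 shuffle partitions map to 8 files
    (df.repartition(8)
       .write
       .format("delta")
       .mode("overwrite")
       .saveAsTable("main.demo.my_table"))
finally:
    # Restore the previous behavior for subsequent writes
    spark.conf.set("spark.databricks.delta.autoCompact.enabled", prev_compact)
    spark.conf.set("spark.databricks.delta.optimizeWrite.enabled", prev_optimize)

The try/finally block is just a defensive pattern so the session settings are restored even if the write fails.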
Give it a try and let us know how it goes!
Regards
Alfonso Gallardo
-------------------
I love working with tools like Databricks, Python, Azure, Microsoft Fabric, Azure Data Factory, and other Microsoft solutions, focusing on developing scalable and efficient solutions with Apache Spark.