<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: In Unity Catalog repartition method issue in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/in-unity-catalog-repartition-method-issue/m-p/129859#M48615</link>
    <description>&lt;P&gt;Thank you &lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/130106"&gt;@agallard&lt;/a&gt;&amp;nbsp;,&amp;nbsp;it worked. You are right,&amp;nbsp;&lt;SPAN&gt;Unity Catalog has optimized writes enabled by default.&lt;/SPAN&gt;&lt;/P&gt;</description>
    <pubDate>Wed, 27 Aug 2025 03:25:38 GMT</pubDate>
    <dc:creator>Shiva3</dc:creator>
    <dc:date>2025-08-27T03:25:38Z</dc:date>
    <item>
      <title>In Unity Catalog repartition method issue</title>
      <link>https://community.databricks.com/t5/data-engineering/in-unity-catalog-repartition-method-issue/m-p/96631#M39308</link>
      <description>&lt;P&gt;We are in the process of upgrading our notebooks to Unity Catalog. Previously, I was able to write data to an external Delta table using df.repartition(8).write.save('path'), which correctly created multiple files. However, during testing of the upgrade, this approach no longer produces the expected output.&lt;/P&gt;&lt;P&gt;I attempted to disable auto-compaction with spark.conf.set("spark.databricks.delta.autoCompact.enabled", "false"), but the operation still results in only one Parquet file being created in S3, rather than the intended 8. I need assistance resolving this issue with partitioning and file output after the Unity Catalog upgrade.&lt;BR /&gt;Please help.&lt;/P&gt;</description>
      <pubDate>Tue, 29 Oct 2024 11:44:09 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/in-unity-catalog-repartition-method-issue/m-p/96631#M39308</guid>
      <dc:creator>Shiva3</dc:creator>
      <dc:date>2024-10-29T11:44:09Z</dc:date>
    </item>
    <item>
      <title>Re: In Unity Catalog repartition method issue</title>
      <link>https://community.databricks.com/t5/data-engineering/in-unity-catalog-repartition-method-issue/m-p/96683#M39323</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/109530"&gt;@Shiva3&lt;/a&gt;,&lt;/P&gt;&lt;P&gt;Delta Lake in Unity Catalog may have optimized writes enabled by default, which can reduce the number of files by automatically coalescing partitions during writes. You can try disabling it:&lt;/P&gt;&lt;LI-CODE lang="python"&gt;# Disable auto-compaction and optimized writes
spark.conf.set("spark.databricks.delta.autoCompact.enabled", "false")
spark.conf.set("spark.databricks.delta.optimizeWrite.enabled", "false")

# With both disabled, the repartition count should be preserved on write
df.repartition(8).write.format("delta").mode("overwrite").save("path")&lt;/LI-CODE&gt;&lt;P&gt;Setting both configurations to false ensures that Delta Lake doesn’t automatically combine files or reduce partitions, allowing df.repartition(8) to produce 8 distinct files. You can restore the configurations afterwards.&lt;/P&gt;&lt;P&gt;Try it and comment!&lt;/P&gt;&lt;P&gt;Regards&lt;/P&gt;</description>
      <pubDate>Tue, 29 Oct 2024 16:46:57 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/in-unity-catalog-repartition-method-issue/m-p/96683#M39323</guid>
      <dc:creator>agallard</dc:creator>
      <dc:date>2024-10-29T16:46:57Z</dc:date>
    </item>
    <item>
      <title>Re: In Unity Catalog repartition method issue</title>
      <link>https://community.databricks.com/t5/data-engineering/in-unity-catalog-repartition-method-issue/m-p/129859#M48615</link>
      <description>&lt;P&gt;Thank you &lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/130106"&gt;@agallard&lt;/a&gt;&amp;nbsp;,&amp;nbsp;it worked. You are right,&amp;nbsp;&lt;SPAN&gt;Unity Catalog has optimized writes enabled by default.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 27 Aug 2025 03:25:38 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/in-unity-catalog-repartition-method-issue/m-p/129859#M48615</guid>
      <dc:creator>Shiva3</dc:creator>
      <dc:date>2025-08-27T03:25:38Z</dc:date>
    </item>
  </channel>
</rss>

