<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Delta merge file size control in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/delta-merge-file-size-control/m-p/11509#M6457</link>
    <description>&lt;P&gt;Hello community!&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I have a rather odd issue where a Delta merge is writing very large files (~1 GB) that slow down my pipeline. Here is some context:&lt;/P&gt;&lt;P&gt;I have a dataframe containing updates for several dates in the past. The current and previous day hold the vast majority of rows (&amp;gt;95%), and the rest are spread across older days (around 100 other unique dates). My target table is partitioned by date.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;The issue is that when the merge operation writes files, I end up with 2-3 files of around 1 GB on the largest date partition. My whole pipeline is then blocked by the write of these files, which takes much longer than the others.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I have played with all the obvious configurations, such as:&lt;/P&gt;&lt;P&gt;delta.tuneFileSizesForRewrites&lt;/P&gt;&lt;P&gt;delta.targetFileSize&lt;/P&gt;&lt;P&gt;delta.merge.enableLowShuffle&lt;/P&gt;&lt;P&gt;Everything seems to be ignored and the files remain at this scale.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;Note: running on DBR 10.0 with delta.optimizedWrites.enabled set to true.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Is there anything I am missing?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thank you in advance!&lt;/P&gt;</description>
    <pubDate>Thu, 04 Nov 2021 22:23:46 GMT</pubDate>
    <dc:creator>pantelis_mare</dc:creator>
    <dc:date>2021-11-04T22:23:46Z</dc:date>
    <item>
      <title>Delta merge file size control</title>
      <link>https://community.databricks.com/t5/data-engineering/delta-merge-file-size-control/m-p/11509#M6457</link>
      <description>&lt;P&gt;Hello community!&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I have a rather odd issue where a Delta merge is writing very large files (~1 GB) that slow down my pipeline. Here is some context:&lt;/P&gt;&lt;P&gt;I have a dataframe containing updates for several dates in the past. The current and previous day hold the vast majority of rows (&amp;gt;95%), and the rest are spread across older days (around 100 other unique dates). My target table is partitioned by date.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;The issue is that when the merge operation writes files, I end up with 2-3 files of around 1 GB on the largest date partition. My whole pipeline is then blocked by the write of these files, which takes much longer than the others.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I have played with all the obvious configurations, such as:&lt;/P&gt;&lt;P&gt;delta.tuneFileSizesForRewrites&lt;/P&gt;&lt;P&gt;delta.targetFileSize&lt;/P&gt;&lt;P&gt;delta.merge.enableLowShuffle&lt;/P&gt;&lt;P&gt;Everything seems to be ignored and the files remain at this scale.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;Note: running on DBR 10.0 with delta.optimizedWrites.enabled set to true.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Is there anything I am missing?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thank you in advance!&lt;/P&gt;</description>
      <pubDate>Thu, 04 Nov 2021 22:23:46 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/delta-merge-file-size-control/m-p/11509#M6457</guid>
      <dc:creator>pantelis_mare</dc:creator>
      <dc:date>2021-11-04T22:23:46Z</dc:date>
    </item>
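The per-table properties the question mentions are usually applied with an `ALTER TABLE ... SET TBLPROPERTIES` statement. A minimal sketch of building that statement (the table name `events` is illustrative; the property names are the ones listed in the post):

```python
def set_file_size_props_sql(table: str, target_bytes: int) -> str:
    """Build the ALTER TABLE statement that pins a per-table target
    file size and disables size autotuning for rewrites.
    `table` is an illustrative placeholder, not from the thread."""
    return (
        f"ALTER TABLE {table} SET TBLPROPERTIES ("
        f"'delta.targetFileSize' = '{target_bytes}', "
        f"'delta.tuneFileSizesForRewrites' = 'false')"
    )

# On a cluster you would run this via the live session, e.g.:
# spark.sql(set_file_size_props_sql("events", 256 * 1024 * 1024))
```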
    <item>
      <title>Re: Delta merge file size control</title>
      <link>https://community.databricks.com/t5/data-engineering/delta-merge-file-size-control/m-p/11510#M6458</link>
      <description>&lt;P&gt;Maybe the table size is over 10 TB?&lt;/P&gt;&lt;P&gt;If you use autotune, Delta Lake picks a file size based on the table size:&lt;/P&gt;&lt;P&gt;&lt;A href="https://docs.microsoft.com/en-us/azure/databricks/delta/optimizations/file-mgmt#autotune-based-on-table-size" alt="https://docs.microsoft.com/en-us/azure/databricks/delta/optimizations/file-mgmt#autotune-based-on-table-size" target="_blank"&gt;https://docs.microsoft.com/en-us/azure/databricks/delta/optimizations/file-mgmt#autotune-based-on-table-size&lt;/A&gt;&lt;/P&gt;&lt;P&gt;However, targetFileSize should disable the autotune... weird.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I use the following settings (which create files around 256 MB):&lt;/P&gt;&lt;P&gt;spark.sql("set spark.databricks.delta.autoCompact.enabled = true")&lt;/P&gt;&lt;P&gt;spark.sql("set spark.databricks.delta.optimizeWrite.enabled = true")&lt;/P&gt;&lt;P&gt;spark.sql("set spark.databricks.delta.merge.enableLowShuffle = true")&lt;/P&gt;&lt;P&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 05 Nov 2021 08:56:47 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/delta-merge-file-size-control/m-p/11510#M6458</guid>
      <dc:creator>-werners-</dc:creator>
      <dc:date>2021-11-05T08:56:47Z</dc:date>
    </item>
    <item>
      <title>Re: Delta merge file size control</title>
      <link>https://community.databricks.com/t5/data-engineering/delta-merge-file-size-control/m-p/11511#M6459</link>
      <description>&lt;P&gt;Delta is a transactional format (it keeps incremental changes in JSON files and snapshots in Parquet). When I just want raw performance, I sometimes prefer plain Parquet.&lt;/P&gt;</description>
      <pubDate>Fri, 05 Nov 2021 14:24:24 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/delta-merge-file-size-control/m-p/11511#M6459</guid>
      <dc:creator>Hubert-Dudek</dc:creator>
      <dc:date>2021-11-05T14:24:24Z</dc:date>
    </item>
    <item>
      <title>Re: Delta merge file size control</title>
      <link>https://community.databricks.com/t5/data-engineering/delta-merge-file-size-control/m-p/11512#M6460</link>
      <description>&lt;P&gt;@Pantelis Maroudis​&amp;nbsp;, can you try setting &lt;B&gt;spark.databricks.delta.optimize.maxFileSize&lt;/B&gt;?&lt;/P&gt;</description>
      <pubDate>Wed, 10 Nov 2021 19:37:52 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/delta-merge-file-size-control/m-p/11512#M6460</guid>
      <dc:creator>Sandeep</dc:creator>
      <dc:date>2021-11-10T19:37:52Z</dc:date>
    </item>
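The session-level setting suggested above can be grouped with the rewrite-tuning flag discussed earlier in the thread. A hypothetical sketch of those settings as one config map (key names come from the replies; pairing them this way is an assumption, not something the thread confirms):

```python
# 256 MB cap, i.e. 268435456 bytes, matching the value tried later in the thread.
MAX_FILE_SIZE = 256 * 1024 * 1024

merge_file_size_conf = {
    # Cap the size of files written by OPTIMIZE and MERGE rewrites.
    "spark.databricks.delta.optimize.maxFileSize": str(MAX_FILE_SIZE),
    # Disable table-size-based autotuning so the explicit cap is honored.
    "spark.databricks.delta.tuneFileSizesForRewrites": "false",
}

# On a Databricks cluster you would apply these to the live session, e.g.:
# for key, value in merge_file_size_conf.items():
#     spark.conf.set(key, value)
```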
    <item>
      <title>Re: Delta merge file size control</title>
      <link>https://community.databricks.com/t5/data-engineering/delta-merge-file-size-control/m-p/11513#M6461</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;I took some more time investigating and tried the idea from @Sandeep Chandran​&amp;nbsp;.&lt;/P&gt;&lt;P&gt;I ran 4 different configurations. I cached the update table, and before each run I restored the target table, so the merged data is identical every time.&lt;/P&gt;&lt;P&gt;Here are the files produced by each run on my BIGGEST partition, which is the one blocking the stage:&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper" image-alt="files"&gt;&lt;img src="https://community.databricks.com/t5/image/serverpage/image-id/2352i9CFA4EACD797FBE6/image-size/large?v=v2&amp;amp;px=999" role="button" title="files" alt="files" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;Run 1:&lt;/P&gt;&lt;P&gt;spark.databricks.delta.tuneFileSizesForRewrites: false&lt;/P&gt;&lt;P&gt;(I suppose it uses file tuning based on table size.)&lt;/P&gt;&lt;P&gt;Run 2:&lt;/P&gt;&lt;P&gt;spark.databricks.delta.tuneFileSizesForRewrites: false&lt;/P&gt;&lt;P&gt;spark.databricks.delta.optimize.maxFileSize: 268435456&lt;/P&gt;&lt;P&gt;Run 3:&lt;/P&gt;&lt;P&gt;spark.databricks.delta.tuneFileSizesForRewrites: false&lt;/P&gt;&lt;P&gt;delta.targetFileSize = 268435456 (property set on the target table)&lt;/P&gt;&lt;P&gt;Run 4:&lt;/P&gt;&lt;P&gt;spark.databricks.delta.tuneFileSizesForRewrites: true&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;As extra info, here are the record counts per partition. As you can see, my dataframe is highly skewed.&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper" image-alt="count"&gt;&lt;img src="https://community.databricks.com/t5/image/serverpage/image-id/2355i5AD1F90968625DA0/image-size/large?v=v2&amp;amp;px=999" role="button" title="count" alt="count" /&gt;&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 12 Nov 2021 13:21:32 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/delta-merge-file-size-control/m-p/11513#M6461</guid>
      <dc:creator>pantelis_mare</dc:creator>
      <dc:date>2021-11-12T13:21:32Z</dc:date>
    </item>
    <item>
      <title>Re: Delta merge file size control</title>
      <link>https://community.databricks.com/t5/data-engineering/delta-merge-file-size-control/m-p/11514#M6462</link>
      <description>&lt;P&gt;Hi @Pantelis Maroudis​&amp;nbsp;,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Are you still looking for help to solve this issue?&lt;/P&gt;</description>
      <pubDate>Tue, 07 Dec 2021 01:20:57 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/delta-merge-file-size-control/m-p/11514#M6462</guid>
      <dc:creator>jose_gonzalez</dc:creator>
      <dc:date>2021-12-07T01:20:57Z</dc:date>
    </item>
    <item>
      <title>Re: Delta merge file size control</title>
      <link>https://community.databricks.com/t5/data-engineering/delta-merge-file-size-control/m-p/11515#M6463</link>
      <description>&lt;P&gt;Hello Jose,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I just went with splitting the merge in two: one merge touches many partitions but only a few rows each, and a second touches the 2-3 partitions that contain the bulk of the data.&lt;/P&gt;</description>
      <pubDate>Thu, 09 Dec 2021 18:52:25 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/delta-merge-file-size-control/m-p/11515#M6463</guid>
      <dc:creator>pantelis_mare</dc:creator>
      <dc:date>2021-12-09T18:52:25Z</dc:date>
    </item>
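The workaround described above boils down to partitioning the update dates into a small "hot" set (the current and previous day, holding the bulk of the rows) and a "cold" set of many sparse older dates, then running one merge per group. A minimal sketch of that date split, assuming a two-day hot window (the function name and window size are illustrative, not from the thread):

```python
from datetime import date, timedelta

def split_updates_by_recency(update_dates, today, hot_days=2):
    """Split update dates into a 'hot' group (recent dates holding most
    rows) and a 'cold' group (many sparse older dates), so each group
    can be merged into the target table separately."""
    cutoff = today - timedelta(days=hot_days - 1)
    hot = sorted(d for d in set(update_dates) if d >= cutoff)
    cold = sorted(d for d in set(update_dates) if d < cutoff)
    return hot, cold

# Each group would then drive its own MERGE, filtered on the target's
# date partition column, e.g. (illustrative, assuming delta-spark):
# target.merge(updates.where(col("date").isin(hot)), "t.date = s.date AND t.id = s.id")
```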
  </channel>
</rss>

