<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: INSERT OVERWRITE DIRECTORY in Get Started Discussions</title>
    <link>https://community.databricks.com/t5/get-started-discussions/insert-overwrite-directory/m-p/108278#M9716</link>
    <description>&lt;P&gt;The &lt;CODE&gt;DISTRIBUTE BY COALESCE(1)&lt;/CODE&gt; clause is intended to reduce the number of output files to one. However, it forces all data through a single task, which can cause memory pressure and slow performance.&amp;nbsp;Instead of &lt;CODE&gt;COALESCE(1)&lt;/CODE&gt;, consider using &lt;CODE&gt;REPARTITION(1)&lt;/CODE&gt;, which distributes the upstream work more evenly and can help keep the output file size under control.&lt;/P&gt;
&lt;P class="_1t7bu9h1 paragraph"&gt;Applying compression to the CSV file can significantly reduce its size. You can use the &lt;CODE&gt;compression&lt;/CODE&gt; option to specify the desired compression codec (e.g., &lt;CODE&gt;gzip&lt;/CODE&gt;).&lt;/P&gt;</description>
    <pubDate>Sat, 01 Feb 2025 07:48:33 GMT</pubDate>
    <dc:creator>NandiniN</dc:creator>
    <dc:date>2025-02-01T07:48:33Z</dc:date>
    <item>
      <title>INSERT OVERWRITE DIRECTORY</title>
      <link>https://community.databricks.com/t5/get-started-discussions/insert-overwrite-directory/m-p/106513#M9714</link>
      <description>&lt;P&gt;I am using this query to create a CSV in a volume named&amp;nbsp;&lt;SPAN&gt;test_volsrr&lt;/SPAN&gt; that I created:&lt;/P&gt;&lt;DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;INSERT&lt;/SPAN&gt; &lt;SPAN&gt;OVERWRITE&lt;/SPAN&gt; &lt;SPAN&gt;DIRECTORY&lt;/SPAN&gt; &lt;SPAN&gt;'/Volumes/DATAMAX_DATABRICKS/staging/test_volsrr'&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;USING&lt;/SPAN&gt;&lt;SPAN&gt; CSV&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;OPTIONS&lt;/SPAN&gt;&lt;SPAN&gt; (&lt;/SPAN&gt;&lt;SPAN&gt;'delimiter'&lt;/SPAN&gt; &lt;SPAN&gt;=&lt;/SPAN&gt; &lt;SPAN&gt;','&lt;/SPAN&gt;&lt;SPAN&gt;, &lt;/SPAN&gt;&lt;SPAN&gt;'header'&lt;/SPAN&gt; &lt;SPAN&gt;=&lt;/SPAN&gt; &lt;SPAN&gt;'true'&lt;/SPAN&gt;&lt;SPAN&gt;)&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;SELECT&lt;/SPAN&gt; &lt;SPAN&gt;*&lt;/SPAN&gt; &lt;SPAN&gt;FROM&lt;/SPAN&gt;&lt;SPAN&gt; staging.extract1gb&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&lt;DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;DISTRIBUTE&lt;/SPAN&gt; &lt;SPAN&gt;BY&lt;/SPAN&gt; &lt;SPAN&gt;COALESCE(&lt;/SPAN&gt;&lt;SPAN&gt;1&lt;/SPAN&gt;&lt;SPAN&gt;);&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV&gt;&lt;SPAN&gt;I added&amp;nbsp;DISTRIBUTE BY COALESCE(1) so that a single CSV gets generated instead of multiple CSVs. The extract1gb table is about 1 GB, but the CSV being created is around 230 GB, which makes the query take more than an hour to execute. Can someone please explain this issue and suggest how to generate a CSV of optimal size so that execution becomes faster? I don't want to use PySpark.&lt;/SPAN&gt;&lt;/DIV&gt;&lt;/DIV&gt;&lt;/DIV&gt;&lt;/DIV&gt;</description>
      <pubDate>Tue, 21 Jan 2025 14:55:50 GMT</pubDate>
      <guid>https://community.databricks.com/t5/get-started-discussions/insert-overwrite-directory/m-p/106513#M9714</guid>
      <dc:creator>subhadeep</dc:creator>
      <dc:date>2025-01-21T14:55:50Z</dc:date>
    </item>
    <item>
      <title>Re: INSERT OVERWRITE DIRECTORY</title>
      <link>https://community.databricks.com/t5/get-started-discussions/insert-overwrite-directory/m-p/106878#M9715</link>
      <description>&lt;P class=""&gt;Hey,&lt;/P&gt;&lt;P class=""&gt;The issue you’re facing with the CSV file size being significantly larger than the original table is likely due to the serialization and formatting overhead when exporting the data. A good way to verify this would be to try exporting the same dataset using the Parquet format, which is more optimized for storage and performance.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P class=""&gt;You can also enable compression or&amp;nbsp;export only the necessary columns to minimize the data volume being written&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P class=""&gt;If you think this option is correct, please give it a&amp;nbsp;&lt;span class="lia-unicode-emoji" title=":thumbs_up:"&gt;👍&lt;/span&gt;&lt;/P&gt;&lt;P class=""&gt;&lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/P&gt;&lt;P class=""&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 24 Jan 2025 08:17:28 GMT</pubDate>
      <guid>https://community.databricks.com/t5/get-started-discussions/insert-overwrite-directory/m-p/106878#M9715</guid>
      <dc:creator>Isi</dc:creator>
      <dc:date>2025-01-24T08:17:28Z</dc:date>
    </item>
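The Parquet suggestion in the reply above can be sketched against the query from the original question. The volume path and table name are copied from that post; this is an illustrative variant, not a verified fix:

```sql
-- Hypothetical variant of the original query: write Parquet instead of CSV.
-- Parquet is columnar and compressed by default, so the output should be
-- far closer to the source table's ~1 GB than the 230 GB CSV was.
INSERT OVERWRITE DIRECTORY '/Volumes/DATAMAX_DATABRICKS/staging/test_volsrr'
USING PARQUET
SELECT * FROM staging.extract1gb;
```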
    <item>
      <title>Re: INSERT OVERWRITE DIRECTORY</title>
      <link>https://community.databricks.com/t5/get-started-discussions/insert-overwrite-directory/m-p/108278#M9716</link>
      <description>&lt;P&gt;The &lt;CODE&gt;DISTRIBUTE BY COALESCE(1)&lt;/CODE&gt; clause is intended to reduce the number of output files to one. However, this can lead to inefficiencies and large file sizes because it forces all data to be processed by a single task, which can cause memory and performance issues.&amp;nbsp;Instead of using &lt;CODE&gt;COALESCE(1)&lt;/CODE&gt;, consider using &lt;CODE&gt;REPARTITION(1)&lt;/CODE&gt;. This can help in better distributing the data and reducing the file size.&lt;/P&gt;
&lt;P class="_1t7bu9h1 paragraph"&gt;Applying compression to the CSV file can significantly reduce its size. You can use the &lt;CODE&gt;compression&lt;/CODE&gt; option to specify the desired compression codec (e.g., &lt;CODE&gt;gzip&lt;/CODE&gt;&lt;span class="lia-unicode-emoji" title=":disappointed_face:"&gt;😞&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Sat, 01 Feb 2025 07:48:33 GMT</pubDate>
      <guid>https://community.databricks.com/t5/get-started-discussions/insert-overwrite-directory/m-p/108278#M9716</guid>
      <dc:creator>NandiniN</dc:creator>
      <dc:date>2025-02-01T07:48:33Z</dc:date>
    </item>
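The two suggestions in the reply above (a REPARTITION(1) hint in place of DISTRIBUTE BY COALESCE(1), plus the compression option on the CSV writer) can be combined into one statement. The path and table name are taken from the original question; treat this as a sketch rather than a tested query:

```sql
-- Sketch: replace DISTRIBUTE BY COALESCE(1) with a REPARTITION(1) hint and
-- add gzip compression. Still produces a single output file (now .csv.gz),
-- but the compressed write should be much smaller and faster.
INSERT OVERWRITE DIRECTORY '/Volumes/DATAMAX_DATABRICKS/staging/test_volsrr'
USING CSV
OPTIONS ('delimiter' = ',', 'header' = 'true', 'compression' = 'gzip')
SELECT /*+ REPARTITION(1) */ * FROM staging.extract1gb;
```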
  </channel>
</rss>

