<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Preferred compression format for ingesting large amounts of JSON files with Autoloader in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/preferred-compression-format-for-ingesting-large-amounts-of-json/m-p/59423#M31397</link>
    <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;Sorry, I guess my response wasn't sent. The source is JSON files that are uploaded to an S3 bucket; the sink will be a Delta table, and we are using Auto Loader.&lt;BR /&gt;The question was about the compression format of the incoming JSON files, i.e. whether it would be better to compress them with gzip, bzip2, or another format. The compression ratio is not a concern; it is purely a matter of ingestion performance.&lt;/P&gt;&lt;P&gt;Thank you!&lt;/P&gt;&lt;P&gt;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/29880"&gt;@jose_gonzalez&lt;/a&gt;&lt;/P&gt;</description>
    <pubDate>Tue, 06 Feb 2024 09:13:09 GMT</pubDate>
    <dc:creator>Volker</dc:creator>
    <dc:date>2024-02-06T09:13:09Z</dc:date>
    <item>
      <title>Preferred compression format for ingesting large amounts of JSON files with Autoloader</title>
      <link>https://community.databricks.com/t5/data-engineering/preferred-compression-format-for-ingesting-large-amounts-of-json/m-p/58600#M31210</link>
      <description>&lt;P&gt;&lt;SPAN&gt;Hello Databricks Community,&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;In an IoT context we plan to ingest a large number of JSON files (~2 million per day). The JSON files are in JSON Lines format and need to be compressed on the IoT devices. We are able to suggest which compression format the devices should use, so we are looking for the one that is optimal for ingesting these files.&lt;BR /&gt;The internet resources we found suggest different compression formats that all have their pros and cons. We have currently looked at gzip and bzip2, and it looks like bzip2 could be more performant than gzip.&lt;/P&gt;&lt;P&gt;Does anyone have experience with such a use case and could provide arguments in favor of a certain compression format, or recommend other formats?&lt;/P&gt;&lt;P&gt;Thanks in advance!&lt;/P&gt;</description>
      <pubDate>Mon, 29 Jan 2024 11:24:05 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/preferred-compression-format-for-ingesting-large-amounts-of-json/m-p/58600#M31210</guid>
      <dc:creator>Volker</dc:creator>
      <dc:date>2024-01-29T11:24:05Z</dc:date>
    </item>
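    <!--
    The question above describes the pipeline: compressed JSON-lines files land in S3 and Auto Loader streams them into a Delta table. A minimal sketch of such a pipeline, as a configuration fragment rather than a definitive implementation; the bucket, schema location, checkpoint path, and table name are placeholders, and a Databricks `spark` session is assumed. Spark's JSON reader decompresses gzip and bzip2 files transparently based on the file extension, so no codec option is needed:

```python
# Hypothetical Auto Loader pipeline: compressed JSON-lines files on S3
# are streamed into a Delta table. All paths and names are placeholders.
stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")  # .json.gz / .json.bz2 handled by extension
    .option("cloudFiles.schemaLocation", "s3://my-bucket/_schemas/iot")
    .load("s3://my-bucket/iot/landing/")
)

(
    stream.writeStream
    .option("checkpointLocation", "s3://my-bucket/_checkpoints/iot")
    .trigger(availableNow=True)
    .toTable("iot_events")
)
```

    Whatever codec the devices use, the Delta sink stores Parquet files with its own compression, which is the point the first reply below makes.
    -->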
    <item>
      <title>Re: Preferred compression format for ingesting large amounts of JSON files with Autoloader</title>
      <link>https://community.databricks.com/t5/data-engineering/preferred-compression-format-for-ingesting-large-amounts-of-json/m-p/58642#M31225</link>
      <description>&lt;P&gt;Could you provide more details? For example, your source will be the JSON files; is your sink a Delta table? (Assuming you will use Auto Loader to ingest your data.) If that is the case, then your Delta tables will already be compressed.&lt;/P&gt;</description>
      <pubDate>Mon, 29 Jan 2024 23:43:22 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/preferred-compression-format-for-ingesting-large-amounts-of-json/m-p/58642#M31225</guid>
      <dc:creator>jose_gonzalez</dc:creator>
      <dc:date>2024-01-29T23:43:22Z</dc:date>
    </item>
    <item>
      <title>Re: Preferred compression format for ingesting large amounts of JSON files with Autoloader</title>
      <link>https://community.databricks.com/t5/data-engineering/preferred-compression-format-for-ingesting-large-amounts-of-json/m-p/59423#M31397</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;Sorry, I guess my response wasn't sent. The source is JSON files that are uploaded to an S3 bucket; the sink will be a Delta table, and we are using Auto Loader.&lt;BR /&gt;The question was about the compression format of the incoming JSON files, i.e. whether it would be better to compress them with gzip, bzip2, or another format. The compression ratio is not a concern; it is purely a matter of ingestion performance.&lt;/P&gt;&lt;P&gt;Thank you!&lt;/P&gt;&lt;P&gt;&lt;a href="https://community.databricks.com/t5/user/viewprofilepage/user-id/29880"&gt;@jose_gonzalez&lt;/a&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 06 Feb 2024 09:13:09 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/preferred-compression-format-for-ingesting-large-amounts-of-json/m-p/59423#M31397</guid>
      <dc:creator>Volker</dc:creator>
      <dc:date>2024-02-06T09:13:09Z</dc:date>
    </item>
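    <!--
    The performance question in this thread comes down to how fast each codec decompresses on read. A minimal stdlib sketch of the roundtrip every ingested file undergoes, with illustrative records; gzip and bzip2 stand for the two candidate codecs discussed above:

```python
import bz2
import gzip
import json

# Illustrative JSON-lines payload, as an IoT device might emit it.
records = [{"device_id": i, "temp_c": 20.0 + i} for i in range(3)]
payload = "\n".join(json.dumps(r) for r in records).encode("utf-8")

# Compress with both candidate codecs.
gz_bytes = gzip.compress(payload)
bz2_bytes = bz2.compress(payload)

def parse(raw: bytes) -> list:
    """Parse decompressed bytes line by line, one JSON record per line."""
    return [json.loads(line) for line in raw.decode("utf-8").splitlines()]

# Both codecs round-trip the records identically; they differ only in
# speed and ratio (gzip typically decompresses faster than bzip2).
assert parse(gzip.decompress(gz_bytes)) == records
assert parse(bz2.decompress(bz2_bytes)) == records
```

    As a general observation, not specific guidance from the thread: in Spark, bzip2 is splittable while gzip is not, but with roughly 2 million small files per day, parallelism comes from the file count rather than from splitting individual files, so gzip's faster decompression is usually the deciding factor.
    -->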
  </channel>
</rss>