<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: HELP! Converting GZ JSON to Delta causes massive CPU spikes and ETL's take days! in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/help-converting-gz-json-to-delta-causes-massive-cpu-spikes-and/m-p/18176#M12010</link>
    <description>&lt;P&gt;@Sara Dooley&amp;nbsp;any thoughts?&lt;/P&gt;</description>
    <pubDate>Tue, 21 Jun 2022 15:19:08 GMT</pubDate>
    <dc:creator>Kash</dc:creator>
    <dc:date>2022-06-21T15:19:08Z</dc:date>
    <item>
      <title>HELP! Converting GZ JSON to Delta causes massive CPU spikes and ETL's take days!</title>
      <link>https://community.databricks.com/t5/data-engineering/help-converting-gz-json-to-delta-causes-massive-cpu-spikes-and/m-p/18163#M11997</link>
      <description>&lt;P&gt;Hi there,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I was wondering if I could get your advice.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;We would like to create a bronze Delta table from GZ JSON data stored in S3, but each time we attempt to read and write it our cluster's CPU spikes to 100%. We are not doing any transformations: we simply read from S3, create an as_of_date column, and write back to S3 in Delta.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Currently it takes over 1 hour to read and write 20GB of GZ JSON from S3 using a server with 122GB of memory and 16 cores, which is not efficient.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;When we ETL the data it also writes tiny files (6.7MB-10MB).&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Things we've tried:&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;I initially thought the problem was that our data is GZ JSON, which is not splittable, so Spark has a tough time processing it, but we don't have this issue with other pipelines. (Still unclear if this is it.)&lt;/LI&gt;&lt;LI&gt;I then thought it was a skew problem due to the .withColumn("as_of_date", col('metadata.timestamp').cast('date')) but even after I removed it the problem continued.&lt;/LI&gt;&lt;LI&gt;I even tried to add a SALT hint, but no luck.&lt;/LI&gt;&lt;LI&gt;We tried to define the schema of the JSON, as it is nested, which helped it load faster, but the write took just as long. (This is not ideal since our schemas change over time and defining the schema here causes us to lose data.)&lt;/LI&gt;&lt;LI&gt;We've tried .repartition(1000).&lt;/LI&gt;&lt;LI&gt;We've also tried letting Databricks dynamically detect what is skewed and then setting that programmatically as the skew hint, but no luck.&lt;/LI&gt;&lt;LI&gt;We turned on auto compact and auto optimize to have it write larger files, but it still wrote small 10MB files.&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;source_path = ('s3://test-data/source/2022/06/03/*'.format(year, month, day))
test_data = spark.read.json(source_path).withColumn("as_of_date", col('metadata.timestamp').cast('date'))
test_data.createOrReplaceTempView('test_data')
&amp;nbsp;
test_data.write.format('delta') \
  .partitionBy('as_of_date') \
  .option("mergeSchema", "true") \
  .option("path",'s3://test-data-bronze/').mode("append") \
  .saveAsTable('test_data.bronze')&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 09 Jun 2022 13:49:15 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/help-converting-gz-json-to-delta-causes-massive-cpu-spikes-and/m-p/18163#M11997</guid>
      <dc:creator>Kash</dc:creator>
      <dc:date>2022-06-09T13:49:15Z</dc:date>
    </item>
    <item>
      <title>Re: HELP! Converting GZ JSON to Delta causes massive CPU spikes and ETL's take days!</title>
      <link>https://community.databricks.com/t5/data-engineering/help-converting-gz-json-to-delta-causes-massive-cpu-spikes-and/m-p/18164#M11998</link>
      <description>&lt;P&gt;"server with 122GB of memory and 16 cores"&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Is this a single node cluster?&lt;/P&gt;</description>
      <pubDate>Thu, 09 Jun 2022 17:17:04 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/help-converting-gz-json-to-delta-causes-massive-cpu-spikes-and/m-p/18164#M11998</guid>
      <dc:creator>Dooley</dc:creator>
      <dc:date>2022-06-09T17:17:04Z</dc:date>
    </item>
    <item>
      <title>Re: HELP! Converting GZ JSON to Delta causes massive CPU spikes and ETL's take days!</title>
      <link>https://community.databricks.com/t5/data-engineering/help-converting-gz-json-to-delta-causes-massive-cpu-spikes-and/m-p/18165#M11999</link>
      <description>&lt;P&gt;Yes, in this example it was a single-node cluster, but we also tried scaling this to five 122GB servers; the CPU problem carried over to each server and the job did not speed up at all.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;We also tried mixing compute-optimized instances for the driver with Delta-optimized instances for the workers.&lt;/P&gt;</description>
      <pubDate>Thu, 09 Jun 2022 18:04:11 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/help-converting-gz-json-to-delta-causes-massive-cpu-spikes-and/m-p/18165#M11999</guid>
      <dc:creator>Kash</dc:creator>
      <dc:date>2022-06-09T18:04:11Z</dc:date>
    </item>
    <item>
      <title>Re: HELP! Converting GZ JSON to Delta causes massive CPU spikes and ETL's take days!</title>
      <link>https://community.databricks.com/t5/data-engineering/help-converting-gz-json-to-delta-causes-massive-cpu-spikes-and/m-p/18166#M12000</link>
      <description>&lt;P&gt;Try using &lt;A href="https://docs.databricks.com/ingestion/auto-loader/index.html" alt="https://docs.databricks.com/ingestion/auto-loader/index.html" target="_blank"&gt;Autoloader&lt;/A&gt; and enabling &lt;A href="https://docs.databricks.com/delta/optimizations/auto-optimize.html" alt="https://docs.databricks.com/delta/optimizations/auto-optimize.html" target="_blank"&gt;auto-optimize&lt;/A&gt; as a table property.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;First, try using Autoloader within &lt;A href="https://databricks.com/product/delta-live-tables" alt="https://databricks.com/product/delta-live-tables" target="_blank"&gt;Delta Live Tables&lt;/A&gt; to manage your ETL pipeline for you.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Otherwise, you can write it out in a notebook.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Below is a way to stream your very small GZ JSON files efficiently into Databricks from your S3 bucket &amp;amp; then write them into a compact form, so the rest of your pipeline should show a performance improvement. I suggest you run this on more than one node. Fewer, larger machines are better than many small ones - see "Complex batch ETL" in &lt;A href="https://docs.databricks.com/clusters/cluster-config-best-practices.html" alt="https://docs.databricks.com/clusters/cluster-config-best-practices.html" target="_blank"&gt;this&lt;/A&gt; Best Practices for Cluster Configuration documentation.&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;schema_location = &amp;lt;file_path_defined&amp;gt;
upload_path = &amp;lt;file_path_defined&amp;gt;
checkpoint_path = &amp;lt;file_path_defined&amp;gt;
write_path = &amp;lt;file_path_defined&amp;gt;
&amp;nbsp;
# Set up the stream to read incoming files from the upload path.
bronze_df = spark.readStream.format('cloudFiles') \
  .option('cloudFiles.schemaLocation', schema_location) \
  .option("cloudFiles.maxFilesPerTrigger", &amp;lt;set to the number of cores in your cluster&amp;gt;) \
  .option('cloudFiles.format', 'json') \
  .load(upload_path)
&amp;nbsp;
# Start the stream.
bronze_df.writeStream \
  .format("delta") \
  .option('checkpointLocation', checkpoint_path) \
  .table("user_data_bronze_not_compact")&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;If there are still performance issues, try &lt;A href="https://docs.databricks.com/delta/delta-streaming.html" alt="https://docs.databricks.com/delta/delta-streaming.html" target="_blank"&gt;maxBytesPerTrigger&lt;/A&gt; instead of maxFilesPerTrigger; it may help since you are using GZ JSON and not straight JSON.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Now make sure that all NEW Delta tables will be compacted automatically.&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;%sql
set spark.databricks.delta.properties.defaults.autoOptimize.optimizeWrite = true;
set spark.databricks.delta.properties.defaults.autoOptimize.autoCompact = true;&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;Now make the compact table:&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;checkpoint_path2 = &amp;lt;file_path&amp;gt;
&amp;nbsp;
bronze_df.writeStream \
  .format("delta") \
  .option('checkpointLocation', checkpoint_path2) \
  .table("user_data_bronze_compact")&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;From here you can read the compact Delta table, and you should see better performance.&lt;/P&gt;</description>
      <pubDate>Thu, 09 Jun 2022 22:02:52 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/help-converting-gz-json-to-delta-causes-massive-cpu-spikes-and/m-p/18166#M12000</guid>
      <dc:creator>Dooley</dc:creator>
      <dc:date>2022-06-09T22:02:52Z</dc:date>
    </item>
    <item>
      <title>Re: HELP! Converting GZ JSON to Delta causes massive CPU spikes and ETL's take days!</title>
      <link>https://community.databricks.com/t5/data-engineering/help-converting-gz-json-to-delta-causes-massive-cpu-spikes-and/m-p/18167#M12001</link>
      <description>&lt;P&gt;The problem is that gzip isn't splittable. Decompression is going to be single-threaded as Spark tries to uncompress it. Try the option:&lt;/P&gt;&lt;P&gt;.option("compression", "gzip")&lt;/P&gt;&lt;P&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 09 Jun 2022 23:41:42 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/help-converting-gz-json-to-delta-causes-massive-cpu-spikes-and/m-p/18167#M12001</guid>
      <dc:creator>Anonymous</dc:creator>
      <dc:date>2022-06-09T23:41:42Z</dc:date>
    </item>
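The gzip point above can be reproduced outside Spark with Python's standard gzip module: a gzip member has no internal split points, so it must be decompressed sequentially from the start. A minimal sketch with illustrative data (not the thread's actual files):

```python
import gzip
import io
import json

# Compress a few JSON-lines records into one gzip stream, the way the
# thread's .json.gz files are stored in S3.
records = [{"id": i, "value": i * 2} for i in range(1000)]
raw = "\n".join(json.dumps(r) for r in records).encode("utf-8")
compressed = gzip.compress(raw)

# A gzip member has no internal sync points: to reach record N you must
# decompress every byte before it. That is why Spark hands the whole file
# to a single task, pegging one core while the others sit idle.
with gzip.open(io.BytesIO(compressed), "rt") as f:
    decoded = [json.loads(line) for line in f]

assert decoded == records
```

This is also why a `.repartition()` right after the read can help downstream stages: the decompression itself stays single-threaded per file, but everything after it can then fan out.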
    <item>
      <title>Re: HELP! Converting GZ JSON to Delta causes massive CPU spikes and ETL's take days!</title>
      <link>https://community.databricks.com/t5/data-engineering/help-converting-gz-json-to-delta-causes-massive-cpu-spikes-and/m-p/18168#M12002</link>
      <description>&lt;P&gt;Hi there,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thank you for the advice!&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I set up the Autoloader script this morning in a notebook and it appears to transfer files fairly quickly. I added .option("cloudFiles.inferColumnTypes", "true") in order to detect the schema.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Questions:&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;How do we save user_data_bronze_not_compact to an S3 path partitioned by (yyyy/mm/dd)?&lt;/LI&gt;&lt;LI&gt;How can we set it up so the Autoloader job only triggers once and stops when it has loaded all of the data in the S3 folder?&lt;/LI&gt;&lt;LI&gt;We want to run Autoloader once a day to process the previous day's data. At the moment we use upload_path = ('s3://test-data/calendar/{}/{}/{}'.format(year, month, day)) to load data for a specific day. Is there a better way to do this with Autoloader? Backfill? Incremental?&lt;/LI&gt;&lt;LI&gt;In this query we load data from GZ JSON into Delta and store it in a table (not optimized). Since we do not specify the location of the table in S3, where is this table stored?&lt;/LI&gt;&lt;LI&gt;When we optimize this data and store it in S3, we re-write it again, so in essence we have 3 copies of this data now, right? If so, do we need to run number 3, or can we optimize step 2?&lt;OL&gt;&lt;LI&gt;JSON&lt;/LI&gt;&lt;LI&gt;JSON to Delta (not optimized)&lt;/LI&gt;&lt;LI&gt;Delta to Optimized Delta (optimized)&lt;/LI&gt;&lt;/OL&gt;&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thank you for the help!&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Kash&lt;/P&gt;</description>
      <pubDate>Fri, 10 Jun 2022 16:04:25 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/help-converting-gz-json-to-delta-causes-massive-cpu-spikes-and/m-p/18168#M12002</guid>
      <dc:creator>Kash</dc:creator>
      <dc:date>2022-06-10T16:04:25Z</dc:date>
    </item>
    <item>
      <title>Re: HELP! Converting GZ JSON to Delta causes massive CPU spikes and ETL's take days!</title>
      <link>https://community.databricks.com/t5/data-engineering/help-converting-gz-json-to-delta-causes-massive-cpu-spikes-and/m-p/18169#M12003</link>
      <description>&lt;OL&gt;&lt;LI&gt;I would suggest not writing the multitude of small parquet files to S3, since performance will be horrible compared to writing the compact Delta version of that same data (fewer &amp;amp; larger files) - in our example that was called user_data_bronze_compact. I would not suggest partitioning any table under 1TB, nor having a partition under 1GB, for performance reasons. Your write to S3 will be more efficient with the compact version of the table. You can try writing using foreachBatch() or foreach().&lt;/LI&gt;&lt;LI&gt;Then take that bronze dataframe and use the trigger-once option. See Triggers &lt;A href="https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html#triggers" alt="https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html#triggers" target="_blank"&gt;here&lt;/A&gt;.&lt;/LI&gt;&lt;LI&gt;Autoloader can backfill at an interval using "cloudFiles.backfillInterval".&lt;/LI&gt;&lt;LI&gt;You can find the location of the table with DESCRIBE TABLE EXTENDED user_data_bronze_compact; at the bottom it says "location." You can see the files that make up that table using %fs and then ls file_path_you_grabbed_from_describe_table_extended_step.&lt;/LI&gt;&lt;LI&gt;You can do the turn-on-auto-optimize step before you start the stream &amp;amp; skip the middle checkpoint.&lt;/LI&gt;&lt;/OL&gt;</description>
      <pubDate>Fri, 10 Jun 2022 20:26:11 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/help-converting-gz-json-to-delta-causes-massive-cpu-spikes-and/m-p/18169#M12003</guid>
      <dc:creator>Dooley</dc:creator>
      <dc:date>2022-06-10T20:26:11Z</dc:date>
    </item>
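The file-size guidance above (partitions of at least ~1GB) can be made concrete with quick arithmetic using the thread's own numbers; the helper below is purely illustrative, not a Databricks API:

```python
def file_count(total_bytes: int, file_bytes: int) -> int:
    """Files needed to hold total_bytes at file_bytes each, rounding up."""
    return -(-total_bytes // file_bytes)

GB = 1024 ** 3
MB = 1024 ** 2

# One day of data from the thread: ~20GB landing as ~10MB files, versus
# the ~1GB-per-file target suggested above.
small_files = file_count(20 * GB, 10 * MB)
large_files = file_count(20 * GB, 1 * GB)

assert small_files == 2048
assert large_files == 20
```

Every S3 object opened during a scan costs a request and listing overhead, so 2048 small files per day versus 20 compact ones is roughly a hundredfold difference in per-file overhead, which is the motivation for the compaction step.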
    <item>
      <title>Re: HELP! Converting GZ JSON to Delta causes massive CPU spikes and ETL's take days!</title>
      <link>https://community.databricks.com/t5/data-engineering/help-converting-gz-json-to-delta-causes-massive-cpu-spikes-and/m-p/18170#M12004</link>
      <description>&lt;P&gt;Also, turn off auto-compact if latency is an issue&lt;/P&gt;</description>
      <pubDate>Fri, 10 Jun 2022 20:27:04 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/help-converting-gz-json-to-delta-causes-massive-cpu-spikes-and/m-p/18170#M12004</guid>
      <dc:creator>Dooley</dc:creator>
      <dc:date>2022-06-10T20:27:04Z</dc:date>
    </item>
    <item>
      <title>Re: HELP! Converting GZ JSON to Delta causes massive CPU spikes and ETL's take days!</title>
      <link>https://community.databricks.com/t5/data-engineering/help-converting-gz-json-to-delta-causes-massive-cpu-spikes-and/m-p/18172#M12006</link>
      <description>&lt;P&gt;Hi there,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I was able to set up a notebook, but I am having difficulty getting it to backfill incrementally, day by day.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I would like to process all of our 2022 data, which is stored in a year/month/day layout, incrementally to reduce the load. I have a notebook set up that iterates between a start_date_param and end_date_param so it runs one job for each day to backfill.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;When I use a specific upload path in Autoloader, for example&lt;/P&gt;&lt;P&gt;upload_path = ('s3://test-data/calendar/2022/01/01/*'.format(year, month, day)), I get this error:&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;java.lang.IllegalStateException: Found mismatched event: key calendar/2022/01/02/00-11-6a088a39-4180-4efe-852d-11d09e6c2eb8.json.gz doesn't have the prefix: calendar/2022/01/01/&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;When I do not specify the year/month/day, Autoloader tries to load the entire directory for 2022 rather than doing it incrementally. I see in the Spark UI that it's trying to load 49K files.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;How do we set it up so it loads data for the first day, writes it and partitions it by day, then goes on to the next day?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I saw that you mentioned we should not partition by year/month/day as that slows down the read, but then our S3 directory will have tons of small files.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Lastly, how do we set it up to write in 1GB files rather than the 10MB chunks it writes now with auto-optimize and auto-compact?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I've also set .option('cloudFiles.backfillInterval', '1 day') \&lt;/P&gt;&lt;P&gt;and also tried .option('cloudFiles.backfillInterval', 1) \&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Any thoughts?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thank you again for your help!&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Avkash&lt;/P&gt;</description>
      <pubDate>Tue, 14 Jun 2022 17:57:04 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/help-converting-gz-json-to-delta-causes-massive-cpu-spikes-and/m-p/18172#M12006</guid>
      <dc:creator>Kash</dc:creator>
      <dc:date>2022-06-14T17:57:04Z</dc:date>
    </item>
    <item>
      <title>Re: HELP! Converting GZ JSON to Delta causes massive CPU spikes and ETL's take days!</title>
      <link>https://community.databricks.com/t5/data-engineering/help-converting-gz-json-to-delta-causes-massive-cpu-spikes-and/m-p/18173#M12007</link>
      <description>&lt;P&gt;Hi Kaniz,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thanks for the note, and thank you everyone for the suggestions and help. @Joseph Kambourakis&amp;nbsp;I added your suggestion to our load, but I did not see any change in how our data loads or the time it takes to load.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I've done some additional research, and one option for those having GZ problems could be to read the data as a text file, which makes it read very quickly, and then use Spark to infer the schema.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;However, in the end we decided to use Autoloader to load our data in... but we're still waiting for help on how to backfill a year's worth of data incrementally (day by day).&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Let me know if you have any suggestions; I added more context to my post above to User16460565755155528764&amp;nbsp;(Databricks).&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;K&lt;/P&gt;</description>
      <pubDate>Wed, 15 Jun 2022 12:47:02 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/help-converting-gz-json-to-delta-causes-massive-cpu-spikes-and/m-p/18173#M12007</guid>
      <dc:creator>Kash</dc:creator>
      <dc:date>2022-06-15T12:47:02Z</dc:date>
    </item>
    <item>
      <title>Re: HELP! Converting GZ JSON to Delta causes massive CPU spikes and ETL's take days!</title>
      <link>https://community.databricks.com/t5/data-engineering/help-converting-gz-json-to-delta-causes-massive-cpu-spikes-and/m-p/18174#M12008</link>
      <description>&lt;P&gt;I would suggest leaving Autoloader to figure out the best way to backfill all your files instead of trying to do the increment yourself with your own backfill scheme. upload_path = ('s3://test-data/calendar/2022/01/01/*'.format(year, month, day)) &amp;lt;-- the problem I see here is that there is no place where you are putting the year, month, or day. Maybe you mean this?&lt;/P&gt;&lt;P&gt;upload_path = ('s3://test-data/calendar/{}/{}/{}/*'.format(year, month, day))&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I mentioned that you should not repartition because, with the code I specified before, you will have a compact version of a Delta table that is made up of very large files. So the fact that you have many small files after doing the auto-optimize step, the Autoloader stream read &amp;amp; the last checkpoint I mentioned is unusual. Did you do a&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;DESCRIBE TABLE EXTENDED user_data_bronze_compact&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;and get the location? Did you then, in the next cell, do an&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;%fs 
ls file_path_of_delta_table&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;to see the size of the files? What sizes are you seeing?&lt;/P&gt;</description>
      <pubDate>Wed, 15 Jun 2022 18:44:03 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/help-converting-gz-json-to-delta-causes-massive-cpu-spikes-and/m-p/18174#M12008</guid>
      <dc:creator>Dooley</dc:creator>
      <dc:date>2022-06-15T18:44:03Z</dc:date>
    </item>
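The str.format placeholder bug called out above is easy to verify in plain Python; the bucket path is the thread's own example:

```python
year, month, day = 2022, "01", "02"

# Without {} placeholders, str.format silently ignores its arguments and
# returns the literal string unchanged - the bug Dooley points out.
broken = 's3://test-data/calendar/2022/01/01/*'.format(year, month, day)
assert broken == 's3://test-data/calendar/2022/01/01/*'

# With placeholders, the path tracks the date variables.
fixed = 's3://test-data/calendar/{}/{}/{}/*'.format(year, month, day)
assert fixed == 's3://test-data/calendar/2022/01/02/*'
```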
    <item>
      <title>Re: HELP! Converting GZ JSON to Delta causes massive CPU spikes and ETL's take days!</title>
      <link>https://community.databricks.com/t5/data-engineering/help-converting-gz-json-to-delta-causes-massive-cpu-spikes-and/m-p/18175#M12009</link>
      <description>&lt;P&gt;Hi there,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thank you for getting back to me. &lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;You are correct that typically we leave the upload path like this upload_path = ('s3://test-data/calendar/{}/{}/{}/*'.format(year, month, day)) and each day we fill in the date programmatically in order to minimize the data read from S3. &lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;It appears that Autoloader has a better way of doing this and for the backfill we did leave the path empty so Autoloader could identify all buckets but my concern was that Autoloader was trying to load 2TB in memory rather than read and write incrementally. &lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;My question is if we leave the path /* so Autoloader can determine the best way to read/write then why does it try to load all of the data at once? &lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;With regard to repartition. Each folder is 20GB compressed in size and when Autoloader reads the data and writes it back out it only writes it in 10MB chunks. We have autocompact and auto optimize turned on and it still does this. &lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;%sql
set spark.databricks.delta.properties.defaults.autoOptimize.optimizeWrite = true;
set spark.databricks.delta.properties.defaults.autoOptimize.autoCompact = true;&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;Could the problem be that, when I'm doing the optimized write, I am partitioning by as_of_date, which is making Autoloader read all the data in and then write?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Here is the current code we have in place:&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;upload_path = ('s3://test-data/calendar/*/*/*').format(year,month,day)
write_path = 's3://test-data-bronze/not_optimized/'
&amp;nbsp;
schema_location = 's3://test-autoloader/not_optimized/'
checkpoint_path = 's3://test-autoloader/not_optimized/'
&amp;nbsp;
# Set up the stream to read incoming files from the upload path.
bronze_df = spark.readStream.format('cloudFiles') \
  .option('cloudFiles.schemaLocation', schema_location) \
  .option("cloudFiles.maxFilesPerTrigger", 16) \
  .option("cloudFiles.inferColumnTypes", "true") \
  .option('cloudFiles.format', 'json') \
  .option('cloudFiles.backfillInterval', 1) \
  .load(upload_path) 
&amp;nbsp;
# Start the stream.
bronze_df.writeStream.foreachBatch(bronze_df) \
&amp;nbsp;
checkpoint_path2 = 's3://test-autoloader/optimized/'
bronze_df.withColumn("as_of_date",col('metadata.timestamp').cast('date')) \
  .writeStream \
  .format('delta') \
  .trigger(once=True) \
  .partitionBy('as_of_date') \
  .option('checkpointLocation', checkpoint_path2) \
  .option("path",'s3://test-data-bronze/optimized/') \
  .table('test-data.calendar_bronze')&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Would it be possible to get on a 10 min call where I can show you the issue via a screen share? &lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;This will be an extremely expensive ETL for us so we would like to get it right.&lt;/P&gt;</description>
      <pubDate>Wed, 15 Jun 2022 18:58:56 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/help-converting-gz-json-to-delta-causes-massive-cpu-spikes-and/m-p/18175#M12009</guid>
      <dc:creator>Kash</dc:creator>
      <dc:date>2022-06-15T18:58:56Z</dc:date>
    </item>
    <item>
      <title>Re: HELP! Converting GZ JSON to Delta causes massive CPU spikes and ETL's take days!</title>
      <link>https://community.databricks.com/t5/data-engineering/help-converting-gz-json-to-delta-causes-massive-cpu-spikes-and/m-p/18176#M12010</link>
      <description>&lt;P&gt;@Sara Dooley​&amp;nbsp;any thoughts? &lt;/P&gt;</description>
      <pubDate>Tue, 21 Jun 2022 15:19:08 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/help-converting-gz-json-to-delta-causes-massive-cpu-spikes-and/m-p/18176#M12010</guid>
      <dc:creator>Kash</dc:creator>
      <dc:date>2022-06-21T15:19:08Z</dc:date>
    </item>
    <item>
      <title>Re: HELP! Converting GZ JSON to Delta causes massive CPU spikes and ETL's take days!</title>
      <link>https://community.databricks.com/t5/data-engineering/help-converting-gz-json-to-delta-causes-massive-cpu-spikes-and/m-p/18177#M12011</link>
      <description>&lt;P&gt;So yes, Autoloader writes smaller files, and then when you do that checkpoint it will compact those small files into larger files so your queries and processing will be more efficient. You have to set those flags at the beginning for that checkpoint to compact the data into larger files. This is necessary for the rest of your pipeline's efficiency with that data.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Did I answer your question?&lt;/P&gt;</description>
      <pubDate>Mon, 27 Jun 2022 15:50:38 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/help-converting-gz-json-to-delta-causes-massive-cpu-spikes-and/m-p/18177#M12011</guid>
      <dc:creator>Dooley</dc:creator>
      <dc:date>2022-06-27T15:50:38Z</dc:date>
    </item>
    <item>
      <title>Re: HELP! Converting GZ JSON to Delta causes massive CPU spikes and ETL's take days!</title>
      <link>https://community.databricks.com/t5/data-engineering/help-converting-gz-json-to-delta-causes-massive-cpu-spikes-and/m-p/18178#M12012</link>
      <description>&lt;P&gt;Hi there,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thanks for getting back to me.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;My question is about backfilling and loading HISTORICAL data incrementally (day by day) using Autoloader.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I would like to run Autoloader on data that is partitioned by year/month/day. I would like Autoloader to read this data incrementally and then write it incrementally, to prevent CPU overloading and other memory issues.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;When I run Autoloader today using the setup above, I see in the Spark UI that it is trying to load the entire 1TB S3 bucket into memory rather than reading it day by day (incrementally).&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Do I have the backfill set up incorrectly, or am I missing something that can make Autoloader backfill daily first?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Avkash&lt;/P&gt;</description>
      <pubDate>Mon, 27 Jun 2022 17:59:46 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/help-converting-gz-json-to-delta-causes-massive-cpu-spikes-and/m-p/18178#M12012</guid>
      <dc:creator>Kash</dc:creator>
      <dc:date>2022-06-27T17:59:46Z</dc:date>
    </item>
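For the day-by-day backfill Kash describes, one hypothetical approach (outside Auto Loader's own cloudFiles.backfillInterval machinery) is a small driver loop that yields one zero-padded prefix per day; daily_prefixes is not a Databricks API, just a sketch using the thread's example bucket:

```python
from datetime import date, timedelta

def daily_prefixes(start, end, bucket="s3://test-data/calendar"):
    """Yield one zero-padded year/month/day glob per day, inclusive."""
    d = start
    while d <= end:
        yield f"{bucket}/{d.year}/{d.month:02d}/{d.day:02d}/*"
        d += timedelta(days=1)

paths = list(daily_prefixes(date(2021, 12, 31), date(2022, 1, 2)))
assert paths == [
    "s3://test-data/calendar/2021/12/31/*",
    "s3://test-data/calendar/2022/01/01/*",
    "s3://test-data/calendar/2022/01/02/*",
]
```

Each prefix could then drive one batch read-and-write per day, though, as Dooley notes in the thread, pointing Auto Loader at a prefix that later shifts can trigger the "Found mismatched event" error, so the checkpoint location would need to change per day in such a scheme.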
    <item>
      <title>Re: HELP! Converting GZ JSON to Delta causes massive CPU spikes and ETL's take days!</title>
      <link>https://community.databricks.com/t5/data-engineering/help-converting-gz-json-to-delta-causes-massive-cpu-spikes-and/m-p/18179#M12013</link>
      <description>&lt;P&gt;Are you still running this latest run on a single node?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Do you have 2TB in the S3 bucket that you want fed into Databricks for processing, plus any new data that ends up in that S3 bucket?&lt;/P&gt;</description>
      <pubDate>Mon, 27 Jun 2022 21:35:39 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/help-converting-gz-json-to-delta-causes-massive-cpu-spikes-and/m-p/18179#M12013</guid>
      <dc:creator>Dooley</dc:creator>
      <dc:date>2022-06-27T21:35:39Z</dc:date>
    </item>
    <item>
      <title>Re: HELP! Converting GZ JSON to Delta causes massive CPU spikes and ETL's take days!</title>
      <link>https://community.databricks.com/t5/data-engineering/help-converting-gz-json-to-delta-causes-massive-cpu-spikes-and/m-p/18180#M12014</link>
      <description>&lt;P&gt;Yes exactly. Do we have the Backfill setup correctly for Autoloader?&lt;/P&gt;</description>
      <pubDate>Wed, 06 Jul 2022 14:34:17 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/help-converting-gz-json-to-delta-causes-massive-cpu-spikes-and/m-p/18180#M12014</guid>
      <dc:creator>Kash</dc:creator>
      <dc:date>2022-07-06T14:34:17Z</dc:date>
    </item>
    <item>
      <title>Re: HELP! Converting GZ JSON to Delta causes massive CPU spikes and ETL's take days!</title>
      <link>https://community.databricks.com/t5/data-engineering/help-converting-gz-json-to-delta-causes-massive-cpu-spikes-and/m-p/18181#M12015</link>
      <description>&lt;P&gt;Just a quick check. Did you try these paths with dbfs:/ paths instead of S3 paths? Keep the upload path the way it is, but try changing these to a dbfs:/ path. Maybe dbfs:/user/your_user_email/?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;write_path = 's3://test-data-bronze/not_optimized/'
schema_location = 's3://test-autoloader/not_optimized/'
checkpoint_path = 's3://test-autoloader/not_optimized/'&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 07 Jul 2022 15:04:05 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/help-converting-gz-json-to-delta-causes-massive-cpu-spikes-and/m-p/18181#M12015</guid>
      <dc:creator>Dooley</dc:creator>
      <dc:date>2022-07-07T15:04:05Z</dc:date>
    </item>
    <item>
      <title>Re: HELP! Converting GZ JSON to Delta causes massive CPU spikes and ETL's take days!</title>
      <link>https://community.databricks.com/t5/data-engineering/help-converting-gz-json-to-delta-causes-massive-cpu-spikes-and/m-p/18182#M12016</link>
      <description>&lt;PRE&gt;&lt;CODE&gt;upload_path = '/mnt/test-data/calendar/'
 
schema_location ='Users/your_email_address/project_directory_name/_checkpoints'
checkpoint_path = 'Users/your_email_address/project_directory_name/_checkpoints'
&amp;nbsp;
# Set up the stream to read incoming files from the mounted upload path.
bronze_df = spark.readStream.format('cloudFiles') \
  .option('cloudFiles.schemaLocation', schema_location) \
  .option("cloudFiles.maxFilesPerTrigger", 16) \
  .option("cloudFiles.inferColumnTypes", "true") \
  .option('cloudFiles.format', 'json') \
  .option('cloudFiles.backfillInterval', "1 day") \
  .load(upload_path) 
&amp;nbsp;
# Start the stream.
bronze_df.writeStream \
  .format("delta") \
  .option('checkpointLocation', checkpoint_path) \
  .table("user_data_bronze_compact")&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;Here is my thinking. Assuming you &lt;A href="https://docs.databricks.com/data/data-sources/aws/amazon-s3.html#mount-an-s3-bucket" alt="https://docs.databricks.com/data/data-sources/aws/amazon-s3.html#mount-an-s3-bucket" target="_blank"&gt;mounted&lt;/A&gt; the S3 bucket you want to load from, it will be in the DBFS file system under mnt, as you see in the code above. Then you save the data &amp;amp; schema in the DBFS file system as well - maybe under Users.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;For the write, I took out everything else you had in there so we keep processing to a minimum and just bring the data into Delta. THEN you can do the timestamp casting. I highly suggest you do not repartition during this process, since that would be very expensive &amp;amp; unnecessary, as explained above, with the autoOptimize optimizeWrite &amp;amp; autoCompact flags turned on before you run this code.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Also, the backfill will be expensive at first because it needs to bring all the (larger) historical data into Delta format, but after that point only NEW files will be ingested, so the processing will be lighter later on. However, you are going to have CPU overload issues if you stay with that &lt;B&gt;single&lt;/B&gt;-node way of reading in data. You need a distributed read, so make a cluster with a driver &amp;amp; 2-8 workers, and watch your Ganglia metrics (under cluster information, in the "Metrics" tab) to help us troubleshoot moving forward. Once you update that cluster, set maxFilesPerTrigger to the total number of cores across the workers.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Let me know.&lt;/P&gt;</description>
      <pubDate>Thu, 07 Jul 2022 15:29:14 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/help-converting-gz-json-to-delta-causes-massive-cpu-spikes-and/m-p/18182#M12016</guid>
      <dc:creator>Dooley</dc:creator>
      <dc:date>2022-07-07T15:29:14Z</dc:date>
    </item>
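The closing advice (set cloudFiles.maxFilesPerTrigger to the total number of worker cores) amounts to simple arithmetic; the helper below is hypothetical, not a Spark or Databricks API:

```python
def max_files_per_trigger(num_workers: int, cores_per_worker: int) -> int:
    # One gzip file per available worker core per micro-batch, so every
    # core has a (non-splittable) stream to decompress and none sit idle.
    return num_workers * cores_per_worker

# e.g. five workers like the thread's 16-core machines:
assert max_files_per_trigger(5, 16) == 80
```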
  </channel>
</rss>

