<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: How to Efficiently Read Nested JSON in PySpark? in Data Engineering</title>
    <link>https://community.databricks.com/t5/data-engineering/how-to-efficiently-read-nested-json-in-pyspark/m-p/27149#M19029</link>
    <description>&lt;P&gt;I'm interested in seeing what others have come up with. Currently I'm using json_normalize(), then taking any additional nested structures and using a loop to pull them out and re-combine them.&lt;/P&gt;</description>
    <pubDate>Mon, 21 Feb 2022 19:00:10 GMT</pubDate>
    <dc:creator>Chris_Shehu</dc:creator>
    <dc:date>2022-02-21T19:00:10Z</dc:date>
    <item>
      <title>How to Efficiently Read Nested JSON in PySpark?</title>
      <link>https://community.databricks.com/t5/data-engineering/how-to-efficiently-read-nested-json-in-pyspark/m-p/27147#M19027</link>
      <description>&lt;P&gt;I am having trouble efficiently reading &amp;amp; parsing a large number of stream files in PySpark!&lt;/P&gt;
&lt;B&gt;Context&lt;/B&gt;
&lt;P&gt;Here is the schema of the JSON stream files I am reading. The blank lines are redactions for confidentiality purposes.&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;root
 |-- location_info: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- restaurant_type: string (nullable = true)
 |    |    |
 |    |    |
 |    |    |-- other_data: array (nullable = true)
 |    |    |    |-- element: struct (containsNull = true)
 |    |    |    |    |-- other_data_1: string (nullable = true)
 |    |    |    |    |-- other_data_2: string (nullable = true)
 |    |    |    |    |-- other_data_3: string (nullable = true)
 |    |    |    |    |-- other_data_4: string (nullable = true)
 |    |    |    |    |-- other_data_5: string (nullable = true)
 |    |    |
 |    |    |-- latitude: string (nullable = true)
 |    |    |-- longitude: string (nullable = true)
 |    |    |-- timezone: string (nullable = true)
 |-- restaurant_id: string (nullable = true)&lt;/CODE&gt;&lt;/PRE&gt;
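&lt;P&gt;For reference, here is roughly that schema written out as an explicit StructType (just a sketch: the two redacted fields are omitted, and every leaf is kept as a string to mirror the printSchema() output above). I refer back to it below.&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;from pyspark.sql.types import StructType, StructField, ArrayType, StringType

# Explicit schema mirroring the printSchema() output (redacted fields omitted)
other_data_type = ArrayType(StructType([
    StructField('other_data_1', StringType(), True),
    StructField('other_data_2', StringType(), True),
    StructField('other_data_3', StringType(), True),
    StructField('other_data_4', StringType(), True),
    StructField('other_data_5', StringType(), True),
]), True)

stream_schema = StructType([
    StructField('location_info', ArrayType(StructType([
        StructField('restaurant_type', StringType(), True),
        StructField('other_data', other_data_type, True),
        StructField('latitude', StringType(), True),
        StructField('longitude', StringType(), True),
        StructField('timezone', StringType(), True),
    ]), True), True),
    StructField('restaurant_id', StringType(), True),
])&lt;/CODE&gt;&lt;/PRE&gt;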
&lt;B&gt;Current Method of Reading &amp;amp; Parsing (which works but takes TOO long)&lt;/B&gt;
&lt;UL&gt;&lt;LI&gt;Although the following method works (it was itself the solution to even getting the files read at all), it takes very long once the number of files grows into the thousands&lt;/LI&gt;&lt;LI&gt;Each file is around 10 MB&lt;/LI&gt;&lt;LI&gt;The files are essentially "stream" files with names like &lt;PRE&gt;&lt;CODE&gt;s3://bucket_name/raw/2020/03/05/04/file-stream-6-2020-03-05-04-01-04-123-b978-2e2b-5672-fa243fs4aeb4&lt;/CODE&gt;&lt;/PRE&gt;, so I read them in as JSON in PySpark (not sure what else I would read them in as anyway?)
  &lt;UL&gt;&lt;LI&gt;Note that I replace &lt;B&gt;{"restaurant_id&lt;/B&gt; with &lt;B&gt;'\n{"restaurant_id'&lt;/B&gt;; if I don't, the read operation only picks up the first record in each file and ignores the rest...&lt;/LI&gt;&lt;/UL&gt;&lt;/LI&gt;&lt;/UL&gt;
&lt;PRE&gt;&lt;CODE&gt;# Reading multiple files in the dir
source_df_1 = spark.read.json(
    sc.wholeTextFiles("file_path/*")
      .values()
      .flatMap(lambda x: x.replace('{"restaurant_id', '\n{"restaurant_id').split('\n'))
)

# explode here to have restaurant_id and the nested data
exploded_source_df_1 = source_df_1.select(
    col('restaurant_id'),
    explode(col('location_info')).alias('location_info')
)

# Via SQL operation: this will solve the problem for parsing
exploded_source_df_1.createOrReplaceTempView('result_1')

subset_data_1 = spark.sql('''
    SELECT restaurant_id,
           location_info.latitude,
           location_info.longitude,
           location_info.timezone
    FROM result_1
''')&lt;/CODE&gt;&lt;/PRE&gt;
&lt;B&gt;Things I would love help with:&lt;/B&gt;
&lt;UL&gt;&lt;LI&gt;(1) Is there a faster way I can read this in?&lt;/LI&gt;&lt;LI&gt;(2) If I try to cache / persist the data frame, when would I be able to do it? The &lt;PRE&gt;&lt;CODE&gt;.values().flatMap(lambda x: x.replace('{"restaurant_id', '\n{"restaurant_id'))&lt;/CODE&gt;&lt;/PRE&gt; step seems to be an action in itself, so when I call persist() at the end the entire read seems to be redone. See the sketch after this list.&lt;/LI&gt;&lt;/UL&gt;
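&lt;P&gt;For (1) and (2) together, here is a sketch of what I have in mind (it assumes the explicit stream_schema written out above, so Spark can skip the schema-inference pass over every file):&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;from pyspark import StorageLevel

# Same line-repair trick as above, kept as a plain RDD of JSON strings
repaired_lines = (sc.wholeTextFiles("file_path/*")
                  .values()
                  .flatMap(lambda x: x.replace('{"restaurant_id', '\n{"restaurant_id')
                                      .split('\n')))

# Passing the explicit schema avoids an extra inference pass over all files
source_df_1 = spark.read.json(repaired_lines, schema=stream_schema)

# persist() is lazy: it only marks the plan. The first action after this line
# materializes the cache; later actions reuse it instead of re-reading S3.
source_df_1.persist(StorageLevel.MEMORY_AND_DISK)&lt;/CODE&gt;&lt;/PRE&gt;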
&lt;P&gt;You can refer to this thread for how I arrived at this solution in the first place: link. Thank you very much for your time!&lt;/P&gt;
</description>
      <pubDate>Wed, 17 Jun 2020 01:18:09 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/how-to-efficiently-read-nested-json-in-pyspark/m-p/27147#M19027</guid>
      <dc:creator>DarshilDesai</dc:creator>
      <dc:date>2020-06-17T01:18:09Z</dc:date>
    </item>
    <item>
      <title>Re: How to Efficiently Read Nested JSON in PySpark?</title>
      <link>https://community.databricks.com/t5/data-engineering/how-to-efficiently-read-nested-json-in-pyspark/m-p/27149#M19029</link>
      <description>&lt;P&gt;I'm interested in seeing what others have come up with. Currently I'm using json_normalize(), then taking any additional nested structures and using a loop to pull them out and re-combine them.&lt;/P&gt;
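&lt;P&gt;Roughly this pattern, as a minimal pandas sketch (the record layout and column names here are made up for illustration):&lt;/P&gt;
&lt;PRE&gt;&lt;CODE&gt;import pandas as pd

# Made-up records just to illustrate the shape; the real data is nested deeper.
records = [
    {"restaurant_id": "r1",
     "location_info": [
         {"restaurant_type": "cafe",
          "other_data": [{"other_data_1": "a"}, {"other_data_1": "b"}]}
     ]}
]

# First pass: flatten the location_info array while keeping restaurant_id.
flat = pd.json_normalize(records, record_path="location_info",
                         meta=["restaurant_id"])

# Any column still holding a list of dicts gets pulled out in a loop,
# normalized, and re-combined with the parent frame (assumes no nulls).
for col in ["other_data"]:
    exploded = flat.explode(col).reset_index(drop=True)
    child = pd.json_normalize(exploded[col].tolist())
    flat = exploded.drop(columns=[col]).join(child)&lt;/CODE&gt;&lt;/PRE&gt;</description>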
      <pubDate>Mon, 21 Feb 2022 19:00:10 GMT</pubDate>
      <guid>https://community.databricks.com/t5/data-engineering/how-to-efficiently-read-nested-json-in-pyspark/m-p/27149#M19029</guid>
      <dc:creator>Chris_Shehu</dc:creator>
      <dc:date>2022-02-21T19:00:10Z</dc:date>
    </item>
  </channel>
</rss>

