11-30-2023 03:39 AM
I have a dataframe with a schema similar to the following:
id: string
array_field: array
    element: struct
        field1: string
        field2: string
        array_field2: array
            element: struct
                nested_field: string
I am trying to flatten this into rows.
The issue I am having is that when I do something like e_df = df.select("id", F.explode("array_field")), it only returns the exploded values for the first id. I am not sure if this is something simple, but I have wasted a lot of time trying to sort out what the issue is. As far as I can tell, every id has an associated array field, so I would expect the result to be something like:
id, col
1, first element struct
1, second element struct
2, first element struct
2, second element struct
and so on. Any insight here would be very helpful.
11-30-2023 03:56 AM
Hi @BenLambert, it seems like you're dealing with a nested structure in your DataFrame and want to flatten it into rows.
Let’s address this step by step.
To achieve the desired result, you’ll need to use the explode function twice: once for the outer array (array_field) and then for the inner array (array_field2).
Here’s how you can do it:
Make sure to replace the sample data with your actual DataFrame, and adjust the column names accordingly. This approach should work for flattening nested structures in your DataFrame.
Feel free to ask if you encounter any issues or need further assistance! 😊
11-30-2023 03:58 AM
@Kaniz_Fatma thanks for the tip. This is the approach I am trying to take, but the issue is that exploding the outer array fails. It only returns the 4 values associated with the first id and not all possible rows.
11-30-2023 04:01 AM
Just to make it clear, I have about 19,000 ids and when I explode I only get 4 rows that correspond to the first id.
11-30-2023 04:08 AM
@Kaniz_Fatma additionally I am getting a message that says something to the effect of:
"This query is on a non-Delta table with many small files. To improve the performance of queries, convert to Delta and run the OPTIMIZE command on the table dbfs:/mnt/bucket/id_1.json.", which makes it seem that it is only reading the file containing the first id for some reason.
12-04-2023 02:18 AM
It turns out that if the exploded fields don't match the schema that was defined when reading the JSON in the first place, all the data that doesn't match is silently dropped. This is not really nice default behaviour.