11-30-2023 03:39 AM
I have a dataframe with a schema similar to the following:
    id: string
    array_field: array
        element: struct
            field1: string
            field2: string
            array_field2: array
                element: struct
                    nested_field: string
I am trying to flatten this into rows.
The issue I am having is that when I do something like e_df = df.select("id", F.explode("array_field")), it only returns the exploded values for the first id. I am not sure if this is something simple, but I have wasted a lot of time trying to sort out what the issue is. When I look at the data, every id has an associated array_field, and I would expect the result to look something like:

    id, col
    1, first element struct
    1, second element struct
    2, first element struct
    2, second element struct
and so on. Any insight here would be very helpful.
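In case it helps, here is a minimal sketch of the full flatten I am aiming for, using the field names from the schema above (the aliases elem and elem2 are just names I made up; explode_outer is used instead of explode so that ids whose arrays are null or empty are kept rather than silently dropped):

    from pyspark.sql import functions as F

    # Explode the outer array: one row per (id, element struct).
    e_df = df.select("id", F.explode_outer("array_field").alias("elem"))

    # Pull the struct fields up to top-level columns, then explode the inner array.
    flat_df = (
        e_df
        .select(
            "id",
            F.col("elem.field1").alias("field1"),
            F.col("elem.field2").alias("field2"),
            F.explode_outer("elem.array_field2").alias("elem2"),
        )
        .select(
            "id",
            "field1",
            "field2",
            F.col("elem2.nested_field").alias("nested_field"),
        )
    )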
12-04-2023 02:18 AM
It turns out that if the exploded fields don't match the schema that was defined when reading the JSON in the first place, all of the data that doesn't match is silently dropped. This is not a very nice default behaviour.
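For anyone hitting the same thing, a sketch of one way to surface the non-matching records instead of losing them, assuming the JSON is read with an explicit schema (the field layout below is hypothetical, and spark is the session Databricks provides; mode and columnNameOfCorruptRecord are standard Spark JSON reader options):

    from pyspark.sql import functions as F
    from pyspark.sql.types import ArrayType, StringType, StructField, StructType

    # Hypothetical schema matching the structure above, plus a string column
    # to capture any record that does not conform to it.
    schema = StructType([
        StructField("id", StringType()),
        StructField("array_field", ArrayType(StructType([
            StructField("field1", StringType()),
            StructField("field2", StringType()),
            StructField("array_field2", ArrayType(StructType([
                StructField("nested_field", StringType()),
            ]))),
        ]))),
        StructField("_corrupt_record", StringType()),
    ])

    df = (
        spark.read
        .schema(schema)
        .option("mode", "PERMISSIVE")  # keep non-conforming rows instead of dropping them
        .option("columnNameOfCorruptRecord", "_corrupt_record")
        .json("dbfs:/mnt/bucket/")
    )

    # Cache before filtering: Spark disallows queries that reference only the
    # internal corrupt-record column directly on raw JSON files.
    bad = df.cache().filter(F.col("_corrupt_record").isNotNull())
    bad.show(truncate=False)

Setting mode to FAILFAST instead makes the read error out on the first mismatch, which at least makes the problem loud.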
11-30-2023 03:58 AM
@Retired_mod thanks for the tip. This is the approach I am trying to take, but the issue is that exploding the outer array fails. It only returns the 4 values associated with the first id and not all possible rows.
11-30-2023 04:01 AM
Just to make it clear, I have about 19,000 ids and when I explode I only get 4 rows that correspond to the first id.
11-30-2023 04:08 AM
@Retired_mod additionally I am getting a message that says something to the effect of:
"This query is on a non-Delta table with many small files. To improve the performance of queries, convert to Delta and run the OPTIMIZE command on the table dbfs:/mnt/bucket/id_1.json.", which makes it seem that it is only reading the file containing the first id for some reason.