Explode is giving unexpected results.

BenLambert
Contributor

I have a dataframe with a schema similar to the following:

id: string

array_field: array

   element: struct

          field1: string

          field2: string

          array_field2: array

               element: struct

                     nested_field: string

I am trying to flatten this into rows. 

The issue I am having is that when I do something like e_df = df.select("id", F.explode("array_field")), it only returns the exploded values for the first id. I am not sure if this is something simple, but I have wasted a lot of time trying to sort out what the issue is. As far as I can tell, every id has an associated array_field, so I would expect the result to be something like:

id, col

1, first element struct

1, second element struct

2, first element struct

2, second element struct 

and so on. Any insight here would be very helpful.
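For reference, the full flatten I am after would look something like this. Just a sketch using df and the field names from the schema above:

from pyspark.sql import functions as F

# Explode the outer array: one row per element of array_field.
exploded = df.select("id", F.explode("array_field").alias("elem"))

# Pull up the struct fields, then explode the inner array as well.
flat = (
    exploded
    .select(
        "id",
        F.col("elem.field1").alias("field1"),
        F.col("elem.field2").alias("field2"),
        F.explode_outer("elem.array_field2").alias("inner"),
    )
    .select("id", "field1", "field2", F.col("inner.nested_field").alias("nested_field"))
)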

     

1 ACCEPTED SOLUTION


BenLambert
Contributor

It turns out that if the fields being exploded don't match the schema that was defined when the JSON was read in the first place, all the data that doesn't match is silently dropped. This is not a very nice default behaviour.
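For anyone who hits this later, one way to make the mismatch loud instead of silent is to fail the read, or capture the bad records, when the JSON doesn't fit the declared schema. This is only a sketch; expected_schema and the path are placeholders for whatever you defined:

from pyspark.sql import functions as F

# Fail immediately on any record that doesn't match the declared schema.
df = (
    spark.read
    .schema(expected_schema)
    .option("mode", "FAILFAST")
    .json("dbfs:/mnt/bucket/*.json")
)

# Alternative: keep the bad records visible instead of failing. Add a string
# column named _corrupt_record to the schema, read in PERMISSIVE mode with
# columnNameOfCorruptRecord set to that column, then inspect the rows where
# it is populated:
# bad = (
#     spark.read
#     .schema(schema_with_corrupt_col)
#     .option("mode", "PERMISSIVE")
#     .option("columnNameOfCorruptRecord", "_corrupt_record")
#     .json("dbfs:/mnt/bucket/*.json")
#     .filter(F.col("_corrupt_record").isNotNull())
# )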


4 REPLIES

@Retired_mod thanks for the tip. This is the approach I am trying to take, but the issue is that exploding the outer array fails: it only returns the 4 values associated with the first id, not all possible rows.

Just to make it clear, I have about 19,000 ids and when I explode I only get 4 rows that correspond to the first id.
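A sketch of the kind of check that shows whether rows are being lost before the explode: explode drops any row whose array is null or empty, so you can count how many of the ~19,000 ids actually came back with a populated array_field after the read.

from pyspark.sql import functions as F

# How many ids have a null array_field after the read?
df.select(
    F.count("*").alias("total_ids"),
    F.count(F.when(F.col("array_field").isNull(), 1)).alias("null_array_field"),
).show()

# explode_outer keeps rows whose array is null/empty (col is just null there),
# so comparing this count with the plain explode shows how many ids are dropped.
df.select("id", F.explode_outer("array_field")).count()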

@Retired_mod additionally I am getting a message that says something to the effect of:

"This query is on a non-Delta table with many small files. To improve the performance of queries, convert to Delta and run the OPTIMIZE command on the table dbfs:/mnt/bucket/id_1.json.", which makes it seem that it is only reading the file containing the first id for some reason.

