Load parent columns and not unnest using pyspark? Found invalid character(s) ' ,;{}()\n' in schema

KristiLogos
New Contributor III

I'm not sure I'm doing this correctly, but I'm having some issues with the column names when I try to load to a table in our Databricks catalog. I have multiple .json.gz files in our blob container that I want to load to a table:

df = spark.read.option("multiline", "true").json(f"{LOC}/*.json.gz")
df.printSchema()
 
The schema looks something like this; for example, user_properties has the nested fields App Brnd and Archit:
 

|-- user_id: string (nullable = true)
|-- user_properties: struct (nullable = true)
| |-- App Brnd: string (nullable = true)
| |-- Archit: string (nullable = true)
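
A hypothetical one-row equivalent (values invented, not the original data) is enough to reproduce the same schema and the same failure without the .json.gz files:

sample = [("u1", {"App Brnd": "acme", "Archit": "arm64"})]
df = spark.createDataFrame(
    sample,
    "user_id string, user_properties struct<`App Brnd`: string, Archit: string>",
)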
 
When I try to load the df to our table for the first time:

df.write.mode("overwrite").saveAsTable("test.events")
 
I see this error:
Found invalid character(s) among ' ,;{}()\n\t=' in the column names of your schema. Please use other characters and try again.

9 REPLIES

szymon_dybczak
Contributor III

Hi @KristiLogos ,

Check whether any of the characters from the error message appear in your JSON's key names.

Hi @szymon_dybczak - thanks for responding. The error says the problem is in the column names; I think it's the space in the 'child' column. However, I've been asked not to flatten the PySpark dataframe and to ingest the 'raw' data as-is, i.e. leave the nested columns nested. I suspect that's the issue, because if I convert the PySpark dataframe to a pandas dataframe I can load it into my table without issues. Is there a way to not 'flatten' and still load the PySpark dataframe?

Hi @KristiLogos ,

Could you show a sample JSON?

I have run some tests. Do you have a space in the App Brnd column?
Is App Brnd wrapped in double quotes?
Below are the checks:
1. Keys without double quotes, with a space in App Brnd -> returns a corrupt record
2. Keys with double quotes, no spaces -> returns valid JSON
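
Roughly, the two checks look like this (the payloads are illustrative, not your data):

valid_json = '{"user_id": "1", "user_properties": {"App Brnd": "x", "Archit": "y"}}'
invalid_json = '{user_id: "1", user_properties: {App Brnd: "x", Archit: "y"}}'

for label, payload in [("quoted keys", valid_json), ("unquoted keys", invalid_json)]:
    test_df = spark.read.json(spark.sparkContext.parallelize([payload]))
    print(label)
    test_df.printSchema()  # the unquoted variant comes back as _corrupt_record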


Hi @filipniziol - thanks for replying. I realized it might actually be a different nested (child) column: the event_properties fields have periods in their names (shown below). This is most likely the issue, but how can I keep the nested values nested and not split out event_properties? I believe I could convert to a pandas dataframe, since I'm able to load it that way, but converting my Spark df to a pandas df and back to a Spark df just to load a Databricks table seems like a lot.

|-- event_properties: struct (nullable = true)
| |-- Active.Elapsed Minutes: double (nullable = true)
| |-- Active.Frequency: long (nullable = true)
| |-- Active.Maximum: double (nullable = true)
| |-- Active.Minimum: double (nullable = true)

This is what the event_properties column looks like in a pandas dataframe, which is what I want it to look like in Spark:

df_pd = df.toPandas()
 
event_properties
(None, 2, None, None, 1)
 

Hi @KristiLogos ,
now I understand the problem. 🙂
You have JSON and you want to save it to a Delta table as a struct, right?
The allowed characters for Delta table column names are:

Letters: A-Z, a-z
Digits: 0-9
Underscores: _

The key here is that your structs must also adhere to this naming convention.
Before saving structs to a table you need to sanitize the column names like this:

App Brnd -> App_Brnd
Active.Elapsed Minutes -> Active_Elapsed_Minutes
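
Nested fields can't be renamed with withColumnRenamed, but you can cast each struct column to a copy of its type with cleaned field names (Spark matches struct fields by position when casting struct to struct). A minimal sketch - the helper names are mine:

import re
from pyspark.sql.functions import col
from pyspark.sql.types import ArrayType, DataType, StructField, StructType

def sanitize(name: str) -> str:
    # Replace every character Delta rejects (' ,;{}()\n\t=' plus '.') with '_'
    return re.sub(r"[ ,;{}()\n\t=.]", "_", name)

def sanitize_type(dt: DataType) -> DataType:
    # Rebuild struct/array types with sanitized field names, recursively
    if isinstance(dt, StructType):
        return StructType([
            StructField(sanitize(f.name), sanitize_type(f.dataType), f.nullable)
            for f in dt.fields
        ])
    if isinstance(dt, ArrayType):
        return ArrayType(sanitize_type(dt.elementType), dt.containsNull)
    return dt

def clean_nested_column_names(df):
    # Cast each column to its sanitized type and alias it to its sanitized name
    return df.select([
        col(f"`{f.name}`").cast(sanitize_type(f.dataType)).alias(sanitize(f.name))
        for f in df.schema.fields
    ])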

Hope it helps

@filipniziol 

Yes, that's the issue I'm facing. I'm confused about this: do I have to rename the child columns? I just want to keep the Spark dataframe as it is, without flattening. And if I do this, it only updates the parent column names:

 

def clean_column_names(df):
    for col in df.columns:
        new_col_name = col.replace('$', '').replace('.', '_').replace(' ', '_').replace(';', '_').replace(',', '_')
        df = df.withColumnRenamed(col, new_col_name)
    return df


df_cleaned = clean_column_names(df)

# Check the cleaned column names
df_cleaned.printSchema()
df_cleaned.columns

Hi @KristiLogos ,

Ok, now I understand the problem better. Set the Delta table property delta.columnMapping.mode to name to enable column mapping mode. Then try again.

https://kb.databricks.com/delta/allow-spaces-and-special-characters-in-nested-column-names-with-delt...
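
For a brand-new table, the property has to be set when the table is first created. A sketch, assuming the writer accepts delta.-prefixed table properties as options at creation (for an existing table, use ALTER TABLE ... SET TBLPROPERTIES instead):

(df.write
    .format("delta")
    .mode("overwrite")
    .option("delta.columnMapping.mode", "name")  # needs reader v2 / writer v5
    .option("delta.minReaderVersion", "2")
    .option("delta.minWriterVersion", "5")
    .saveAsTable("test.events"))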

Amazing, thanks! That worked flawlessly.

I'm happy that it worked for you 🙂
