Invalid characters in column name
06-19-2024 04:02 AM
I get the following error:
com.databricks.sql.transaction.tahoe.DeltaAnalysisException: [DELTA_INVALID_CHARACTERS_IN_COLUMN_NAMES] Found invalid character(s) among ' ,;{}()\n\t=' in the column names of your schema.
It's a new Databricks instance, and I've checked the CSV headers. They are all valid, with no special characters in the column names.
This is my code:
06-19-2024 04:38 AM
Steps to Debug and Resolve
Check for Hidden Characters:
- Sometimes non-printing or control characters (a BOM, zero-width spaces, stray tabs) cause this even though the headers look clean. Print the headers and inspect them closely, or use a tool that reveals hidden characters.
- You can use Python or a text editor to load and print the raw column names, as in the sketch below.
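A minimal check, assuming the file is reachable at a hypothetical /dbfs/ path (adjust for your workspace):

```python
# Print the raw header line with repr() so hidden characters
# (BOM, \t, \n, zero-width spaces) become visible.
path = "/dbfs/FileStore/data/input.csv"  # hypothetical path

with open(path, "r", encoding="utf-8") as f:
    header = f.readline()

print(repr(header))  # e.g. '\ufeffid,name ,amount\n' reveals a BOM and a trailing space
```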
Explicitly Specify the Schema:
Instead of relying on schema inference, specify the schema manually. This bypasses whatever column names Spark would otherwise infer from the header row.
- Inspect the first batch of data: read a small sample and call df.printSchema() to see exactly which column names Spark produced. A sketch follows below.
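A minimal sketch in PySpark, assuming the notebook's built-in spark session and hypothetical column names and path:

```python
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

# Hypothetical schema -- replace the field names and types with your own.
schema = StructType([
    StructField("id", StringType(), True),
    StructField("name", StringType(), True),
    StructField("amount", DoubleType(), True),
])

df = (spark.read
      .format("csv")
      .option("header", "true")  # skip the header row; names come from the schema
      .schema(schema)            # no inference, so a bad header can't poison the names
      .load("dbfs:/FileStore/data/input.csv"))

df.printSchema()  # inspect what Spark actually produced before writing to Delta
```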
Sanitize Column Names:
If there are invalid characters, you can rename the columns to valid names programmatically, as in the sketch below:
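One way to do this, stripping exactly the characters Delta rejects (' ,;{}()\n\t='), assuming df is the DataFrame you just read:

```python
import re

# Replace any character Delta rejects with an underscore, then
# collapse runs of underscores and trim them from the ends.
def sanitize(name: str) -> str:
    cleaned = re.sub(r"[ ,;{}()\n\t=]", "_", name)
    return re.sub(r"_+", "_", cleaned).strip("_")

df = df.toDF(*[sanitize(c) for c in df.columns])
```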
Check the Delta Lake Configuration:
- Ensure that Delta Lake table properties and session settings are correct and that no conflicting options affect schema validation. If you actually need the offending characters in your column names, Delta's column mapping feature can permit them (see below).
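If keeping the original column names matters more than renaming them, name-based column mapping relaxes the character restriction. A sketch with a hypothetical table name; note that it upgrades the table's protocol versions, so confirm all readers and writers support that:

```python
# Enable name-based column mapping so otherwise-invalid characters
# (spaces, etc.) are allowed in column names.
spark.sql("""
    ALTER TABLE my_catalog.my_schema.my_table SET TBLPROPERTIES (
        'delta.columnMapping.mode' = 'name',
        'delta.minReaderVersion' = '2',
        'delta.minWriterVersion' = '5'
    )
""")
```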
CSV Reader Settings:
- Double-check the options passed to the CSV reader (delimiter, quote and escape characters, multiline handling, etc.) so a mis-parsed header doesn't turn into invalid column names; an example follows.
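For instance, a read with the header-related options spelled out (the path and delimiter are assumptions; adjust them for your file):

```python
df = (spark.read
      .format("csv")
      .option("header", "true")    # treat the first row as column names, not data
      .option("delimiter", ",")    # a wrong delimiter yields one giant header column
      .option("quote", '"')
      .option("escape", '"')
      .option("multiLine", "true") # tolerate newlines inside quoted fields
      .load("dbfs:/FileStore/data/input.csv"))

print(df.columns)  # verify the parsed header before writing to Delta
```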
06-19-2024 05:13 AM
I might have to specify the schema; I've tried all the other options. Thanks for responding.
06-19-2024 04:51 AM - edited 06-19-2024 04:52 AM
My guess is that you have a newline character (\n) in one of the CSV header columns; they are not easy to spot. Have you checked for that? You can also set .option("header", "true") so Spark doesn't treat your header as content, and make sure the delimiter is right with .option("delimiter", "<delimiter char here>").
Good luck
06-19-2024 05:15 AM
Hi, I've checked for the newline. I'll try specifying the delimiter. Thanks for responding.