
Invalid characters in column name

WynanddB
New Contributor III

I get the following error:

com.databricks.sql.transaction.tahoe.DeltaAnalysisException: [DELTA_INVALID_CHARACTERS_IN_COLUMN_NAMES] Found invalid character(s) among ' ,;{}()\n\t=' in the column names of your schema.

It's a new instance of Databricks, and I've checked the CSV headers. They are all valid, with no special characters in the column names.

This is my code:

device_path = "dbfs:/mnt/dblakehouse/RawLanding/ysoft/device"
(spark.readStream
    .format("cloudFiles")                           # Auto Loader
    .option("cloudFiles.format", "csv")
    .option("cloudFiles.inferColumnTypes", "true")  # infer types instead of all strings
    .option("cloudFiles.schemaLocation", f"{device_path}/checkpointLocation")
    .load(f"{device_path}/")
    .writeStream
    .option("checkpointLocation", f"{device_path}/checkpointLocation")
    .option("mergeSchema", "true")                  # allow schema evolution on write
    .outputMode("append")
    .toTable("printing_poc01.bronze.smartq_devices")
)

 


Rishabh-Pandey
Esteemed Contributor

Steps to Debug and Resolve

  1. Check for Hidden Characters:

    • Non-visible or control characters can cause this error even when the headers look clean. Print the headers and inspect them closely, or use a tool that reveals hidden characters.
    • You can use Python or a text editor to load and print the column names to check for anything hidden (see the first sketch after this list).

  2. Explicitly Specify the Schema:

    • Instead of relying on schema inference, specify the schema manually. This bypasses issues with inferred column names (see the second sketch after this list).

  3. Inspect the First Batch of Data:

    • Read a small sample with a plain batch read and call printSchema() to see exactly which column names Spark infers.

  4. Sanitize Column Names:

    • If there are invalid characters, you can rename the columns to valid names programmatically (see the third sketch after this list).

  5. Check the Delta Lake Configuration:

    • Ensure that the Delta Lake configurations and settings are correct and that no conflicting options affect schema validation.

  6. CSV Reader Settings:

    • Double-check the options passed to the CSV reader, such as the delimiter and escape characters, to ensure they don't mangle the column names.
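For step 1, here is a minimal sketch that prints the raw header line with repr() so hidden characters (\t, \n, a byte-order mark) become visible. It assumes the header is the first line read from the landing path and that the file is comma-delimited:

raw = spark.read.text("dbfs:/mnt/dblakehouse/RawLanding/ysoft/device")
header_line = raw.limit(1).collect()[0][0]   # first line of the first file read
for name in header_line.split(","):          # assumes a comma delimiter
    print(repr(name))                        # repr() exposes control characters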
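For step 2, a sketch of an explicit schema on the Auto Loader read; the column names here are hypothetical placeholders for the real CSV headers:

from pyspark.sql.types import StructType, StructField, StringType

device_schema = StructType([
    StructField("device_id", StringType(), True),    # hypothetical column
    StructField("device_name", StringType(), True),  # hypothetical column
    StructField("location", StringType(), True),     # hypothetical column
])

(spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "csv")
    .schema(device_schema)   # explicit schema, so nothing is inferred
    .load(f"{device_path}/"))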
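For step 4, one way to sanitize the names programmatically is to replace every character Delta rejects with an underscore (the underscore is an arbitrary choice):

import re

df = spark.read.option("header", "true").csv(f"{device_path}/")

# Replace each character from the error message (' ,;{}()\n\t=') with '_'.
clean = df.toDF(*[re.sub(r"[ ,;{}()\n\t=]", "_", c) for c in df.columns])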
Rishabh Pandey

I might have to specify the schema. I've tried all the other suggestions. Thanks for responding.

jacovangelder
Honored Contributor

My guess is that you have a newline character (\n) in one of the CSV header columns. They're not easy to spot. Have you checked for that? You can also try .option("header", "true") so Spark doesn't treat your header as content. You might also want to set the delimiter explicitly with .option("delimiter", "<delimiter char here>").
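A minimal sketch of those two options applied to the reader from the original post; the semicolon delimiter is only an example, substitute the file's actual delimiter:

(spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "csv")
    .option("header", "true")   # treat the first line as column names, not data
    .option("delimiter", ";")   # example only -- use the file's real delimiter
    .option("cloudFiles.schemaLocation", f"{device_path}/checkpointLocation")
    .load(f"{device_path}/"))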

Good luck

Hi, I've checked for the newline. I'll try specifying the delimiter. Thanks for responding.
