Get Started Discussions
Start your journey with Databricks by joining discussions on getting started guides, tutorials, and introductory topics. Connect with beginners and experts alike to kickstart your Databricks experience.

Space in Column names when writing to Hive

JohnJustus
New Contributor III

All,

I have the following code.
from pyspark.sql.functions import col, count_distinct, when, min

df_Warehouse_Utilization = (
    spark.table("hive_metastore.dev_ork.bin_item_detail")
    .join(df_DIM_Bins, col('bin_tag') == df_DIM_Bins.BinKey, 'right')
    .groupBy(col('BinKey'))
    .agg(
        count_distinct(when(col('serial_lot_tag') > 0, col('serial_lot_tag'))).alias('SerialLotCount'),
        count_distinct(when(col('tracking_tag') > 0, col('tracking_tag'))).alias('Track Count'),
        min(col('activation_time')).alias('OldestStorageTime')
    )
    .withColumn('BinUsedFlag', when(col('PartNumbers') > 0, 1).otherwise(0))
)
df_Warehouse_Utilization.write.mode("overwrite").format("delta").saveAsTable("yorkgold.df_DIM_Binsst")
display(df_Warehouse_Utilization)

When I write to the table after the above transformation, the following error occurs:

AnalysisException: Found invalid character(s) among ' ,;{}()\n\t=' in the column names of your schema.

The issue is the space in the column name ('Track Count' above). As soon as I rename the column to 'TrackCount', I can write to Hive.
So my question is: are spaces not allowed in column names? Or is there another option?
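The characters the writer rejects are exactly the ones listed in the error message (space, comma, semicolon, braces, parentheses, newline, tab, equals). One common workaround is to strip those characters from column names before writing. The helpers below are a hypothetical sketch, not a PySpark or Databricks API; the character set is taken verbatim from the error message:

```python
import re

# Character set copied from the AnalysisException message: ' ,;{}()\n\t='
# sanitize() and sanitize_cols() are hypothetical helpers, not PySpark APIs.
INVALID = re.compile(r"[ ,;{}()\n\t=]")

def sanitize(name: str) -> str:
    """Remove every character the Delta writer rejects from one column name."""
    return INVALID.sub("", name)

def sanitize_cols(names):
    """Clean a whole list of column names."""
    return [sanitize(n) for n in names]

print(sanitize("Track Count"))  # → TrackCount
```

Applied to a DataFrame just before the write, this would look like `df = df.toDF(*sanitize_cols(df.columns))`.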

3 REPLIES

daniel_sahal
Esteemed Contributor

@JohnJustus You can try using backticks (`) to escape column names that contain spaces, but the recommendation here is to remove spaces and other special characters from column names altogether.

Thank you, Daniel.

So I have my code below with backticks, but it still doesn't work: col('bin_type').alias(`'Bin Type'`),

Can you please correct me?

df_Bins_Step1 = (
    spark.table("hive_metastore.dev_fivetranpoc_york.bin_master")
    .select(col('tag_number').alias('BinKey'),
        col('bin').alias('Bin'),
        col('bin_type').alias(`'Bin Type'`),
        col('zone').alias('Zone'))
)
display(df_Bins_Step1)
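A side note on where backticks actually belong, as I understand it: in the PySpark DataFrame API, `.alias()` takes a plain Python string, so `col('bin_type').alias('Bin Type')` needs no backticks at all (the backticks in the snippet above are invalid Python syntax). Backtick quoting only matters when the name appears inside SQL text, e.g. in `selectExpr` or `spark.sql`. The `quote_ident` helper below is hypothetical, just a sketch of that SQL-side quoting:

```python
# quote_ident is a hypothetical helper, not part of PySpark.
# Assumption: backticks are needed only inside Spark SQL text,
# not in DataFrame-API calls such as .alias('Bin Type').

def quote_ident(name: str) -> str:
    """Wrap a column name in backticks for use in Spark SQL text,
    doubling any embedded backticks to escape them."""
    return "`" + name.replace("`", "``") + "`"

# DataFrame API -- plain string is enough:
#   col('bin_type').alias('Bin Type')
# SQL text -- quoted form is required:
#   df.selectExpr(f"bin_type AS {quote_ident('Bin Type')}")
print(quote_ident("Bin Type"))  # → `Bin Type`
```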

KandyKad
New Contributor III

Hi,

I have faced this issue a few times. When you overwrite a DataFrame to the Hive catalog in Databricks, column names with spaces or special characters are not allowed by default. However, you can add an option to the write command to bypass that rule and save the DataFrame as a table even when column names contain spaces or special characters.

Solution:

Let's say your dataframe name is 'df'.

The normal PySpark command for saving a table is:

 >> df.write.mode("overwrite").format("delta").saveAsTable("catalog.schema_name.table_name")

This will raise the error: AnalysisException: Found invalid character(s) among ' ,;{}()\n\t=' in the column names of your schema.
 
You need to add .option("delta.columnMapping.mode", "name") to the command, after mode("overwrite"), to resolve it.
 
The final working command would be:
>>  df.write.mode("overwrite").option("delta.columnMapping.mode", "name").format("delta").saveAsTable("catalog.schema_name.table_name")
 
This edit resolves the issue for me, and the table saves correctly even when the schema contains spaces or special characters.
 
Hope it works for you as well. If it does, happy to help!
 
KandyKad
