Space in Column names when writing to Hive
09-26-2023 05:40 PM
All,
I have the following code.
from pyspark.sql.functions import col, count_distinct, when, min

df_Warehouse_Utilization = (
    spark.table("hive_metastore.dev_ork.bin_item_detail")
    .join(df_DIM_Bins, col('bin_tag') == df_DIM_Bins.BinKey, 'right')
    .groupby(col('BinKey'))
    .agg(
        count_distinct(when(col('serial_lot_tag') > 0, col('serial_lot_tag'))).alias('SerialLotCount'),
        count_distinct(when(col('tracking_tag') > 0, col('tracking_tag'))).alias('Track Count'),
        min(col('activation_time')).alias('OldestStorageTime')
    )
    .withColumn('BinUsedFlag', when(col('PartNumbers') > 0, 1).otherwise(0))
)

df_Warehouse_Utilization.write.mode("overwrite").format("delta").saveAsTable("yorkgold.df_DIM_Binsst")
display(df_Warehouse_Utilization)
When I write to the table after the above transformation, the following error occurs:
AnalysisException: Found invalid character(s) among ' ,;{}()\n\t=' in the column names of your schema.
The issue is the space in the column name ('Track Count', shown above). As soon as I rename the column to 'TrackCount', I am able to write to Hive.
So my question is: are spaces simply not allowed, or is there another option?
09-27-2023 10:50 PM
@JohnJustus You can try using backticks (`) to escape a column name that contains spaces, but the recommendation is to remove spaces and any special characters from column names altogether.
09-29-2023 05:44 PM
Thank you Daniel.
I have my code below with backticks, but it still doesn't work - col('bin_type').alias(`'Bin Type'`).
Can you please correct me?
df_Bins_Step1 = (
    spark.table("hive_metastore.dev_fivetranpoc_york.bin_master")
    .select(
        col('tag_number').alias('BinKey'),
        col('bin').alias('Bin'),
        col('bin_type').alias(`'Bin Type'`),
        col('zone').alias('Zone')
    )
)
display(df_Bins_Step1)
11-11-2024 08:30 PM - edited 11-11-2024 08:47 PM
Hi,
Hi, I have faced this issue a few times. When overwriting dataframes to the Hive catalog in Databricks, column names with spaces or special characters are not allowed by default. However, you can add an option statement to bypass that rule and save your dataframe as a table whose column names contain spaces or special characters.
Solution:
Let's say your dataframe name is 'df'.
The normal pyspark command for saving tables is: