03-02-2020 10:34 AM
Hello,
I am running into the problem described in the following Stack Overflow topics. I have tried all the solutions mentioned there, but I get the same error every time. It's as if Spark cannot read fields with spaces in them.
So I am looking for another way to just rename my fields and save the Parquet files back; after that I will continue my transformations in Spark.
Can anyone help me out? Loads of love and thanks 🙂
Labels:
- Parquet file writes
Accepted Solutions
05-21-2020 04:48 AM
One option is to use something other than Spark to read the problematic file, e.g. Pandas, provided the file is small enough to fit on the driver node (Pandas only runs on the driver). If you have multiple files, you can loop through them and fix them one by one, as shown in the sketch after the snippet below.
import pandas as pd

# Read the problematic file with Pandas via the local /dbfs mount
df = pd.read_parquet('/dbfs/path/to/your/file.parquet')

# Rename the columns that contain spaces
df = df.rename(columns={
    "Column One": "col_one",
    "Column Two": "col_two"
})

dfSpark = spark.createDataFrame(df)  # convert to a Spark dataframe
df.to_parquet('/dbfs/path/to/your/fixed/file.parquet')  # and/or save the fixed Parquet
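For the multiple-file case, here is a minimal sketch of that loop, assuming the files live in a hypothetical /dbfs/path/to/your/files/ directory and all share the same problematic column names:

import glob
import os

import pandas as pd

# Hypothetical source and destination directories; adjust to your own DBFS paths.
src_dir = '/dbfs/path/to/your/files'
dst_dir = '/dbfs/path/to/your/fixed'

# Assumed: every file has the same columns that need renaming.
renames = {
    "Column One": "col_one",
    "Column Two": "col_two"
}

for src in glob.glob(os.path.join(src_dir, '*.parquet')):
    df = pd.read_parquet(src)        # read with Pandas (runs on the driver)
    df = df.rename(columns=renames)  # drop the spaces from the column names
    # write the fixed copy under the same file name
    df.to_parquet(os.path.join(dst_dir, os.path.basename(src)))

Once all the files are fixed, spark.read.parquet('dbfs:/path/to/your/fixed/') should be able to read the whole directory as usual.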
05-13-2020 04:38 AM
Looks like this is a known issue/limitation stemming from Parquet internals, and it will not be fixed. Apparently there is no workaround in Spark itself.