One option is to read the problematic file with something other than Spark, e.g. Pandas, provided the file is small enough to fit in memory on the driver node (Pandas runs only on the driver). If you have multiple files, you can loop through them and fix them one by one.
import pandas as pd

# Read via the /dbfs FUSE mount (local file API, driver-only)
df = pd.read_parquet('/dbfs/path/to/your/file.parquet')

# Rename the columns whose names Spark rejects
df = df.rename(columns={
    "Column One": "col_one",
    "Column Two": "col_two",
})

dfSpark = spark.createDataFrame(df)  # convert to a Spark dataframe
df.to_parquet('/dbfs/path/to/your/fixed/file.parquet')  # and/or save the fixed Parquet