One option is to use something other than Spark to read the problematic file, e.g. Pandas, if your file is small enough to fit in memory on the driver node (Pandas runs only on the driver). If you have multiple files, you can loop through them and fix each one in turn, as in the sketch below.
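A minimal sketch of that loop, assuming the files are rejected because their column names contain characters Spark disallows in Parquet (spaces, commas, `;{}()=`, etc., which is the case SPARK-27442 covers); the paths and the underscore-renaming rule are hypothetical, so adapt the "fix" step to whatever is actually wrong with your files:

```python
import glob
import re

import pandas as pd

# Hypothetical input/output locations; requires pyarrow or fastparquet.
for path in glob.glob("data/broken/*.parquet"):
    # Pandas runs on the driver only, so each file must fit in driver memory.
    df = pd.read_parquet(path)
    # Assumed fix: replace characters Spark rejects in column names
    # with underscores. Adjust to your actual problem.
    df.columns = [re.sub(r"[ ,;{}()\n\t=]", "_", c) for c in df.columns]
    df.to_parquet("data/fixed/" + path.split("/")[-1], index=False)

# The rewritten files can then be read with Spark as usual:
# spark.read.parquet("data/fixed/")
```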
Looks like this is a known issue/limitation due to Parquet internals, and it will not be fixed; apparently there is no workaround in Spark itself:
https://issues.apache.org/jira/browse/SPARK-27442