I am still on Databricks Runtime 12.2 LTS. I assume I'm using the same library for reading XML, since the options are similar.
I'm using a custom Python function to flatten the ingested DataFrame. It iterates over all the columns of the input DataFrame; if a column's type is complex, i.e. a struct or an array, it keeps flattening it (explode for arrays, the dot (.) operator for structs) until every column is a simple type.
Something like:
df = spark.read.format('xml').option('rowTag', 'yourRowTag').load(path)  # rowTag = the repeating element in your XML
flattened_df = flatten_func(df)
flattened_df.write.format('parquet').save(destinationpath)
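
For reference, a minimal sketch of what such a flatten function could look like (the renaming scheme and function name here are illustrative, not my production code; explode_outer is used so rows with null or empty arrays aren't dropped):

from pyspark.sql import DataFrame
from pyspark.sql.functions import col, explode_outer
from pyspark.sql.types import ArrayType, StructType

def flatten_func(df: DataFrame) -> DataFrame:
    # Collect columns whose type is still complex (struct or array)
    complex_fields = {f.name: f.dataType for f in df.schema.fields
                      if isinstance(f.dataType, (ArrayType, StructType))}
    while complex_fields:
        name, dtype = next(iter(complex_fields.items()))
        if isinstance(dtype, StructType):
            # Dot operator: promote each struct field to a top-level column
            expanded = [col(name + '.' + child.name).alias(name + '_' + child.name)
                        for child in dtype.fields]
            df = df.select('*', *expanded).drop(name)
        else:
            # Array: explode_outer keeps rows whose array is null or empty
            df = df.withColumn(name, explode_outer(col(name)))
        # Re-scan the schema until only simple types remain
        complex_fields = {f.name: f.dataType for f in df.schema.fields
                          if isinstance(f.dataType, (ArrayType, StructType))}
    return df

One caveat: exploding several independent arrays this way multiplies row counts (a cross-join effect per row), so the output can grow quickly on deeply nested documents.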