spark.conf.set("spark.sql.sources.commitProtocolClass", "org.apache.spark.sql.execution.datasources.SQLHadoopMapReduceCommitProtocol")
spark.conf.set("parquet.enable.summary-metadata", "false")
spark.conf.set("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false")
These settings prevent Spark from writing the metadata files (the `_SUCCESS` marker and the Parquet summary files).
The fact that you get multiple CSV files is a result of parallel processing: each partition is written out by its own task. If you want a single file, add `coalesce(1)` before your write statement. But be aware that this forces all the data through a single task, which will hurt the performance of your Spark job.
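Putting it together, a minimal sketch might look like this (PySpark; the DataFrame contents and output path are placeholders, and running it requires a Spark installation):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("single-file-write").getOrCreate()

# Suppress the _SUCCESS marker and Parquet summary metadata files.
spark.conf.set("spark.sql.sources.commitProtocolClass",
               "org.apache.spark.sql.execution.datasources.SQLHadoopMapReduceCommitProtocol")
spark.conf.set("parquet.enable.summary-metadata", "false")
spark.conf.set("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false")

# Placeholder data; substitute your own DataFrame.
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])

# coalesce(1) merges all partitions into one before writing,
# so only a single part-*.csv file is produced -- at the cost
# of losing write parallelism.
df.coalesce(1).write.mode("overwrite").option("header", "true").csv("/tmp/out")
```

The output directory will still contain one `part-*.csv` file rather than a file with exactly the name you choose; if you need a specific filename, you have to rename the part file afterwards.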