It sounds like Spark is splitting your output into many small files (one per row) despite coalesce(1). Can you check spark.sql.files.maxRecordsPerFile? That setting caps how many records can be written into a single output file; if it is set to 1 (or any positive number), Spark starts a new file every time the cap is reached, regardless of the partition count you get from coalesce(). You can disable the cap for this write by passing the maxRecordsPerFile option as 0:
(table.coalesce(1)
    .write
    .mode("overwrite")
    .format(file_format)                    # likely "csv"
    .option("header", "true")
    .option("delimiter", field_delimiter)
    .option("compression", "gzip")
    .option("maxRecordsPerFile", 0)         # 0 = no per-file record limit
    .save(temp_path))
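
If the write option alone doesn't change anything, it's worth checking whether the limit is coming from the session configuration instead. A minimal sketch, assuming your SparkSession is bound to a variable named spark (that name is my assumption, adjust to your code):

    # Inspect the session-level cap; "0" (the default) means no limit
    current = spark.conf.get("spark.sql.files.maxRecordsPerFile", "0")
    print("spark.sql.files.maxRecordsPerFile =", current)

    # If it has been set to a small positive value somewhere, reset it to 0
    spark.conf.set("spark.sql.files.maxRecordsPerFile", 0)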
But could you be more specific about the issue?