Hi @dsugs, thanks for posting here.
You need to use repartition(1) so that Spark writes a single partition file to S3, and then move that one file to your destination path with the file name you want.
You can use the snippet below:
output_df.repartition(1).write.format(file_format).mode(write_mode).option("header", "true").save(output_path)
fname = [y.name for y in dbutils.fs.ls(output_path) if y.name.startswith("part-")]
dbutils.fs.mv(output_path + "/" + fname[0], f"{output_path}.parquet")
dbutils.fs.rm(output_path, recurse=True)
# This code first lists the files in the output_path directory and picks the one
# whose name starts with "part-". Because the DataFrame was repartitioned to 1,
# Spark writes exactly one part file into that directory.
# The next line moves that part file to a new file named output_path.parquet.
# Finally, the code deletes the output_path directory; recurse=True also removes
# the leftover _SUCCESS and _committed metadata files Spark leaves behind.
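If you need to do this for several outputs, here is a minimal sketch that wraps the same steps into a reusable helper. The function name write_single_file, its parameters, and the example paths are just illustrative, and dbutils is assumed to be available as in a Databricks notebook:

def write_single_file(output_df, output_path, file_format="parquet", write_mode="overwrite", extension=".parquet"):
    # Write the DataFrame as a single partition into a temporary directory
    output_df.repartition(1).write.format(file_format).mode(write_mode).option("header", "true").save(output_path)
    # Find the single part-* file Spark produced
    part_file = [y.name for y in dbutils.fs.ls(output_path) if y.name.startswith("part-")][0]
    # Move it out of the directory to the final single-file destination
    dbutils.fs.mv(output_path + "/" + part_file, output_path + extension)
    # Clean up the temporary directory and its metadata files
    dbutils.fs.rm(output_path, recurse=True)

# Example call (bucket and folder names are placeholders):
write_single_file(output_df, "s3://my-bucket/exports/my_table")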
Hemant Soni