I know how to use Spark in Databricks to write a CSV, but it always produces side-effect files.
For example, here is my code:
```python
file_path = "dbfs:/mnt/target_folder/file.csv"
df.write.mode("overwrite").csv(file_path, header=True)
```
Then what I get is:
- A folder named `file.csv`
- Inside that folder, files named `_committed_xxxx`, `_started_xxxx`, and `_SUCCESS`
- Multiple data files named `part-xxxx`
What I want is only a SINGLE CSV file named `file.csv`. How can I achieve this?
I tried the pandas `to_csv` function, but it doesn't work in a Databricks notebook; the error is `OSError: Cannot save file into a non-existent directory`.
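For context on that last error: pandas raises it whenever the parent directory of the output path does not already exist, and pandas also cannot write to a `dbfs:/...` URI directly (on Databricks, DBFS mounts are visible to pandas only through the local `/dbfs/...` filesystem prefix). A minimal sketch of the failure and the fix, using a hypothetical local path in place of a real mount:

```python
import os
import pandas as pd

# Hypothetical stand-in for a mounted path; on Databricks this would
# look like "/dbfs/mnt/target_folder" rather than "dbfs:/mnt/...".
target_dir = "/tmp/target_folder"

# pandas raises "OSError: Cannot save file into a non-existent
# directory" if target_dir is missing, so create it first.
os.makedirs(target_dir, exist_ok=True)

pdf = pd.DataFrame({"id": [1, 2], "value": ["a", "b"]})
pdf.to_csv(os.path.join(target_dir, "file.csv"), index=False)
```

This only illustrates why the pandas attempt failed; it assumes the DataFrame fits in driver memory, since pandas writes from a single process.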