The number of files written corresponds to the number of partitions in the Spark DataFrame. To reduce the output to a single file, use coalesce(1):
sqlDF.coalesce(1).write.csv(<file-path>)...
Hey Nik,
Can you do a file listing on that directory ".../MyPathName/mydata.csv/" and post the names of the files here?
Your data should be located in the CSV file(s) that begin with "part-00000-tid-xxxxx.csv", with each partition in a separate c...