Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

When I save a Spark DataFrame using df.write.format("csv"), I end up with multiple CSV files. Why is this happening?

User16826992666
Valued Contributor
 
1 ACCEPTED SOLUTION

aladda
Databricks Employee

You get multiple files in a folder because Spark writes each partition out in place to its own "part-..." file, which avoids the network I/O of funneling data to a single writer. You can use coalesce(1) to bring everything into a single partition and write it out as one file, but be mindful of the performance implications.

(df.coalesce(1)                    # merge all partitions into one
   .write.format("csv")           # "csv" must be a string literal
   .option("header", "true")
   .save("singlefile.csv"))       # note: this path is created as a directory


3 REPLIES


User16826994223
Honored Contributor III

Just use

df.coalesce(1).write.csv("file_path")

or

df.repartition(1).write.csv("file_path")

When you are ready to write a DataFrame, first use repartition() or coalesce() to merge the data from all partitions into a single partition, and then save it to a file. This still creates a directory, but it contains a single part file instead of multiple part files.

Both coalesce() and repartition() are Spark transformations that can be used to bring the data down to a single partition. Prefer coalesce(1): it avoids the full shuffle that repartition(1) performs, so it generally performs better and uses fewer resources.

Note: Be very careful when using coalesce() and repartition() on larger datasets, as they are expensive operations and can cause OutOfMemory errors.
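
A quick way to see what both calls do to the partition count (a small illustrative example, not from the original post):

df = spark.range(1_000_000)                       # example data
print(df.rdd.getNumPartitions())                  # e.g. 8, depending on the cluster
print(df.coalesce(1).rdd.getNumPartitions())      # 1, without a full shuffle
print(df.repartition(1).rdd.getNumPartitions())   # 1, via a full shuffle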

brickster_2018
Databricks Employee

This is by design and working as expected: Spark writes the data in a distributed fashion, with each task producing its own part file.

Using coalesce(1) can help generate one file; however, this solution is not scalable for large datasets, as it funnels all the data through a single task.
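
If the dataset is small enough to fit in the driver's memory, another option (a sketch under that assumption, not something suggested in this thread) is to collect it to pandas and write one file directly, bypassing Spark's distributed writer:

# only safe when the full DataFrame fits on the driver
df.toPandas().to_csv("/tmp/singlefile.csv", index=False)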
