Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

How do I create a single CSV file from multiple partitions in Databricks / Spark?

rlgarris
Databricks Employee

I'm using spark-csv to write data to DBFS, which I plan to move to my laptop via standard S3 copy commands.

The default for spark-csv is to write output as one file per partition. I can force it to a single partition, but I would really like to know if there is a generic way to do this.

On a Hadoop file system, I'd simply run something like

hadoop fs -getmerge /user/hadoop/dir1/ ./myoutput.txt

Is there an equivalent from within the Databricks platform?

1 ACCEPTED SOLUTION


rlgarris
Databricks Employee

If the data isn't more than a few GB, you can coalesce your dataset prior to writing it out.

Something like:

df.coalesce(1).write.format("com.databricks.spark.csv").save("...path...")

Then copy it to a single file using a dbutils.fs command:

dbutils.fs.cp("...path...", "...path.csv")
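As a concrete version of this pattern (the paths here are hypothetical, and on Spark 2.0+ the built-in csv writer can replace the com.databricks.spark.csv package):

// Hypothetical paths for illustration.
val tmpDir = "/tmp/report.tmp"
val target = "/tmp/report.csv"

// coalesce(1) forces a single partition, so the output directory holds one part file.
df.coalesce(1)
  .write
  .option("header", "true")
  .csv(tmpDir)

// The writer still creates a directory; copy the lone part file out of it.
val partFile = dbutils.fs.ls(tmpDir).filter(_.name.startsWith("part-"))(0).path
dbutils.fs.cp(partFile, target)
dbutils.fs.rm(tmpDir, recurse = true)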


12 REPLIES


Any tips if the data is more than a few GB? Obviously the concern is that a call to coalesce will bring all the data into driver memory.

THIS IS TERRIBLE ADVICE. DO NOT USE the DataFrame methods .coalesce(1) or .repartition(1) except for very small data sets. Instead, use the HDFS merge mechanism via FileUtil.copyMerge(). This StackOverflow answer correctly identifies how to do it:

http://stackoverflow.com/a/41785085/501113
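For reference, a minimal sketch of that approach, assuming hypothetical paths and a cluster on Hadoop 2.x (FileUtil.copyMerge lives in org.apache.hadoop.fs and was removed in Hadoop 3.x):

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, FileUtil, Path}

// Write the partitioned output as usual -- no coalesce(1), so the write stays distributed.
df.write
  .option("header", "false") // a header would repeat once per part file after the merge
  .csv("/tmp/output-parts")  // hypothetical directory

val conf = new Configuration()
val fs = FileSystem.get(conf)

// Stream-concatenate every file under the source directory into one destination file,
// without funneling the dataset through a single Spark partition.
FileUtil.copyMerge(
  fs, new Path("/tmp/output-parts"), // source filesystem and directory
  fs, new Path("/tmp/output.csv"),   // destination filesystem and file
  false,                             // deleteSource: keep the part files
  conf,
  null                               // no separator string between merged files
)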

JosiahYoder
New Contributor III

See my embellishment of this answer below, filling out the ...s in the "...path...".

rlgarris
Databricks Employee

Thanks Richard. That is useful for single files. I'll add it to our local docs. I ended up writing a shell script that downloads all parts and merges them locally, so that can remain an option for people with larger files.


chaotic3quilibr
New Contributor III

Please look at this StackOverflow answer for the most effective way to use the HDFS FileUtil.copyMerge() command:

http://stackoverflow.com/a/41785085/501113

Is FileUtil.copyMerge() supported in Databricks on DBFS?

JosiahYoder
New Contributor III

If you can fit all the data into RAM on one worker (and thus can use .coalesce(1)), you can use dbutils.fs to find and move the resulting CSV file:

val fileprefix = "/mnt/aws/path/file-prefix"

dataset
  .coalesce(1)
  .write
  //.mode("overwrite") // I usually don't use this, but you may want to.
  .option("header", "true")
  .option("delimiter", "\t")
  .csv(fileprefix + ".tmp")

val partition_path = dbutils.fs.ls(fileprefix + ".tmp/")
  .filter(file => file.name.endsWith(".csv"))(0)
  .path

dbutils.fs.cp(partition_path, fileprefix + ".tab")
dbutils.fs.rm(fileprefix + ".tmp", recurse = true)

If your file does not fit into RAM on the worker, you may want to consider chaotic3quilibrium's suggestion to use FileUtil.copyMerge(). I have not done this, and don't yet know whether it is possible, e.g., on S3.

Sources:

  • Stack Overflow: Writing single CSV file

I'm really miffed that my formatting of the code disappears when I commit the edit.

RandyBonnette
New Contributor II

You need to set the recursive setting on the copy command. Matthew Gascoyne explained it in detail in one of his posts:

When trying to copy a folder from one location to another in Databricks, you may run into the below message
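For what it's worth, dbutils.fs.cp takes a recurse flag for copying directories; a one-line sketch with hypothetical paths:

// Copy an entire directory tree; without recurse = true, cp only copies a single file.
dbutils.fs.cp("/mnt/source/folder", "/mnt/dest/folder", recurse = true)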

ChristianHomber
New Contributor II

Without access to bash, it would be highly appreciated if an option existed within Databricks (e.g. via dbutils.fs).
