How do I create a single CSV file from multiple partitions in Databricks / Spark?

User16826991422
Contributor

I'm using spark-csv to write data to DBFS, which I plan to move to my laptop via standard S3 copy commands.

The default for spark-csv is to write output into multiple partitions. I can force it to a single partition, but I'd really like to know if there is a generic way to do this.

In a Hadoop file system, I'd simply run something like

hadoop fs -getmerge /user/hadoop/dir1/ ./myoutput.txt

Is there an equivalent from within the Databricks platform?

ACCEPTED SOLUTION

User16826991422
Contributor

If the data isn't more than a few GB, then you can coalesce your dataset prior to writing it out.

Something like:

df.coalesce(1).write.format("com.databricks.spark.csv").save("...path...")

Then copy it to a single file using a dbutils.fs command:

dbutils.fs.cp("...path...", "...path...csv")
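
A fuller sketch of that flow, assuming a Databricks notebook where df is the DataFrame above and using a hypothetical output location (the built-in CSV writer of Spark 2.x+ is used here in place of the com.databricks.spark.csv package). Spark always writes a directory, so the single part file inside it has to be located before copying:

val outDir = "/mnt/out/result.tmp"   // hypothetical temporary output directory

df.coalesce(1)                       // one partition => one part file
  .write
  .option("header", "true")
  .csv(outDir)

// Locate the single part-*.csv file Spark wrote inside that directory.
val partFile = dbutils.fs.ls(outDir)
  .filter(_.name.endsWith(".csv"))
  .head
  .path

// Copy it to the final single-file path, then drop the temporary directory.
dbutils.fs.cp(partFile, "/mnt/out/result.csv")
dbutils.fs.rm(outDir, recurse = true)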


12 REPLIES

User16826991422
Contributor

Any tips if the data is more than a few GB? Obviously the concern is that a call to coalesce will bring all the data into driver memory.

THIS IS TERRIBLE ADVICE. DO NOT USE the DataFrame methods .coalesce(1) or .repartition(1) except for very small data sets. Instead, use the HDFS merge mechanism via FileUtils.copyMerge(). This solution on Stack Overflow correctly identifies how to do it:

http://stackoverflow.com/a/41785085/501113

See my embellishment of this answer, filling out the ...s in the "...path...":
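
For reference, a rough sketch of what a copyMerge() call typically looks like on a Hadoop 2.x runtime, using hypothetical paths as stand-ins for the "...path..." placeholders (note the actual Hadoop class is org.apache.hadoop.fs.FileUtil, and copyMerge was removed in Hadoop 3):

import org.apache.hadoop.fs.{FileUtil, Path}

// Hypothetical locations: srcDir holds the part files, dstFile is the merged CSV.
val srcDir  = new Path("/mnt/out/report.tmp")
val dstFile = new Path("/mnt/out/report.csv")

val conf = spark.sparkContext.hadoopConfiguration
val fs   = srcDir.getFileSystem(conf)

// Concatenate every file under srcDir into dstFile through the Hadoop FileSystem
// client, without pulling the data back through Spark.
FileUtil.copyMerge(fs, srcDir, fs, dstFile, /* deleteSource = */ false, conf, null)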

User16826991422
Contributor

Thanks Richard. That is useful for single files. I'll add it to our local docs. I ended up writing a shell script that downloads all parts and merges them locally, so that can remain an option for people with larger files.

chaotic3quilibr
New Contributor III

Please look at this Stack Overflow answer for the most effective way to use the HDFS FileUtils.copyMerge() command:

http://stackoverflow.com/a/41785085/501113

Is FileUtils.copyMerge() supported in Databricks on DBFS?

JosiahYoder
New Contributor III

If you can fit all the data into RAM on one worker (and thus can use .coalesce(1)), you can use dbutils.fs to find and move the resulting CSV file:

val fileprefix = "/mnt/aws/path/file-prefix"

dataset
  .coalesce(1)
  .write
  // .mode("overwrite") // I usually don't use this, but you may want to.
  .option("header", "true")
  .option("delimiter", "\t")
  .csv(fileprefix + ".tmp")

// Find the single part file Spark wrote inside the .tmp directory.
val partition_path = dbutils.fs.ls(fileprefix + ".tmp/")
  .filter(file => file.name.endsWith(".csv"))(0)
  .path

// Copy it out as a single .tab file, then remove the temporary directory.
dbutils.fs.cp(partition_path, fileprefix + ".tab")

dbutils.fs.rm(fileprefix + ".tmp", recurse = true)

If your file does not fit into RAM on the worker, you may want to consider chaoticequilibrium's suggestion to use FileUtils.copyMerge(). I have not done this, and don't yet know if it is possible or not, e.g., on S3.

Sources:

  • Stack Overflow: Writing single CSV file

I'm really miffed that my formatting of the code disappears when I commit the edit.

RandyBonnette
New Contributor II

You need to set the recursive setting on the copy command. Matthew Gascoyne explained it in detail in one of his posts:

When trying to copy a folder from one location to another in Databricks, you may run into the below message.
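
For instance, a minimal sketch with hypothetical folder names; without recurse = true, dbutils.fs.cp copies only a single file:

// Copy an entire folder (and its part files) rather than a single file.
dbutils.fs.cp("dbfs:/mnt/source-folder", "dbfs:/mnt/dest-folder", recurse = true)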

ChristianHomber
New Contributor II

Without access to bash, it would be highly appreciated if an option existed within Databricks (e.g. via dbutils.fs).
