12-02-2015 10:26 AM
Using spark-csv to write data to DBFS, which I plan to move to my laptop via standard S3 copy commands.
The default for spark-csv is to write output into partitions. I can force it to a single partition, but would really like to know if there is a generic way to do this.
In a hadoop file system, I'd simply run something like
hadoop fs -getmerge /user/hadoop/dir1/ ./myoutput.txt
Any equivalent from within the Databricks platform?
12-02-2015 10:26 AM
If the data isn't more than a few GB then you can coalesce your dataset prior to writing it out.
Something like:
df.coalesce(1).write.format("com.databricks.spark.csv").save("...path...")
then copy to a single file using a dbutils.fs command:
dbutils.fs.cp("...path...", "...path....csv")
08-18-2016 05:34 PM
Any tips if the data is more than a few GB? The obvious concern is that a call to coalesce(1) will pull all of the data into a single worker's memory.
03-31-2017 12:18 PM
THIS IS TERRIBLE ADVICE. DO NOT USE the DataFrame methods .coalesce(1) or .repartition(1) except for very small data sets. Instead, use the HDFS merge mechanism via FileUtils.copyMerge(). This StackOverflow answer correctly identifies how to do it:
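For reference, a minimal Scala sketch of that mechanism, assuming a Hadoop 2.x runtime (the method actually lives on org.apache.hadoop.fs.FileUtil and was removed in Hadoop 3.x); the paths here are hypothetical placeholders:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, FileUtil, Path}

// Hypothetical locations: the partitioned CSV output directory and the merged target file.
val srcDir  = new Path("/example/csv-output")
val dstFile = new Path("/example/merged.csv")

val conf = new Configuration()
val fs   = FileSystem.get(conf)

// Concatenates every part file under srcDir into dstFile, streaming file by file
// rather than loading the whole dataset into one node's memory.
// Arguments: srcFS, srcDir, dstFS, dstFile, deleteSource, conf, addString (text appended after each file, null for none).
FileUtil.copyMerge(fs, srcDir, fs, dstFile, true, conf, null)

Whether the default FileSystem on Databricks resolves DBFS paths this way is not something I have verified.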
07-27-2017 01:14 PM
See my embellishment of this answer, filling out the ...s in the "...path...":
12-02-2015 10:26 AM
Thanks Richard. That is useful for single files. I'll add it to our local docs. I ended up writing a shell script that downloads all parts and merges them locally, so that can remain an option for people with larger files.
03-31-2017 12:21 PM
Please look at this StackOverflow answer for the most effective way to use the HDFS FileUtils.copyMerge() command:
12-24-2020 08:59 PM
Is FileUtils.copyMerge() supported in Databricks on DBFS?
07-27-2017 01:10 PM
If you can fit all the data into RAM on one worker (and thus can use .coalesce(1)), you can use dbutils.fs to find and move the resulting CSV file:
val fileprefix = "/mnt/aws/path/file-prefix"

dataset
  .coalesce(1)
  .write
  //.mode("overwrite") // I usually don't use this, but you may want to
  .option("header", "true")
  .option("delimiter", "\t")
  .csv(fileprefix + ".tmp")

val partition_path = dbutils.fs.ls(fileprefix + ".tmp/")
  .filter(file => file.name.endsWith(".csv"))(0).path

dbutils.fs.cp(partition_path, fileprefix + ".tab")

dbutils.fs.rm(fileprefix + ".tmp", recurse = true)
If your file does not fit into RAM on the worker, you may want to consider chaoticequilibrium's suggestion to use FileUtils.copyMerge(). I have not done this, and don't yet know if it is possible or not, e.g., on S3.
Sources:
07-27-2017 01:11 PM
I'm really miffed that my formatting of the code disappears when I commit the edit.
07-03-2019 04:09 AM
You need to set the recursive setting on the copy command. Matthew Gascoyne explained it in detail in one of his posts:
When trying to copy a folder from one location to another in Databricks you may run into the below message
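In short, a minimal sketch of the fix (the paths are hypothetical; recurse is an existing parameter of dbutils.fs.cp):

// Pass recurse = true so the whole folder, not just a single file, is copied.
dbutils.fs.cp("/example/source-folder", "/example/destination-folder", recurse = true)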
01-21-2020 03:50 AM
Without access to bash, it would be highly appreciated if an option existed within Databricks (e.g. via dbutils.fs).