3 weeks ago
Hi Community
Actually my requirement is simple: I need to drop files into Azure Data Lake Gen2 storage from Databricks.
But when I use
df.coalesce(1).write.csv("url to gen 2/stage/")
it creates a part-*.csv file, but I need to rename it to a custom name.
I have gone through a workaround using
dbutils.fs.cp()
It worked, but I have thousands of batch files to transfer like that with custom names, so every time I do that .cp() operation it creates a new job and takes a lot of time
compared to a direct push as part.csv.
Is there any workaround? And I can't use other libraries like pandas or similar; I'm not allowed to.
Please help me.
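For reference, the workaround described above looks roughly like this. It is a minimal sketch, not a tested solution: the output directory and the custom file name are placeholder values, and the `find_part_file` helper is something I made up to locate Spark's generated part file in a directory listing (the Databricks-only calls are commented out since they need a cluster):

```python
# Sketch of the "write with coalesce(1), then copy to a custom name" pattern.
# The helper below is a hypothetical utility; paths/names are placeholders.

def find_part_file(names):
    """Return the Spark-generated part-*.csv file from a directory listing."""
    for name in names:
        if name.startswith("part-") and name.endswith(".csv"):
            return name
    raise FileNotFoundError("no part-*.csv file in listing")

# On Databricks the flow would be (commented out; requires a cluster):
# out_dir = "abfss://container@account.dfs.core.windows.net/stage/tmp"  # placeholder
# df.coalesce(1).write.csv(out_dir)
# part = find_part_file([f.name for f in dbutils.fs.ls(out_dir)])
# dbutils.fs.cp(out_dir + "/" + part,
#               "abfss://container@account.dfs.core.windows.net/stage/report_0001.csv")

print(find_part_file(["_SUCCESS", "part-00000-abc.csv"]))
# → part-00000-abc.csv
```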
3 weeks ago
Hi @Philospher1425,
3 weeks ago
That's the reason why I asked for an alternative workaround. I have tried that; anyway, it doesn't reduce the number of jobs.
Spark could add this tiny thing: when we write like df.write("/filename.csv"), it should write to the given filename instead of creating a folder. I know it seems silly that this hasn't been done to this day. I just want an alternative (without file operations, please), as they add up the time. If it's not possible, just leave it; I will move on.
3 weeks ago
@Philospher1425,
The problem is, in order to generate a single .csv file you have to coalesce your dataset to one partition and lose all the parallelism that Spark provides. While this might work for small datasets, such a pattern will certainly lead to memory issues on larger datasets.
If you think that the pattern you described is a good and valid idea, please submit your idea to https://github.com/apache/spark or Databricks Ideas Portal.
3 weeks ago
Yep, totally agree with you. dbutils.fs.mv is much faster and is the best way to rename files.
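A minimal sketch of the mv-based rename mentioned above. Note the caveat still applies: each mv is a separate driver-side file operation, so it doesn't reduce the per-file overhead for thousands of files. The `mv_args` helper and all paths here are illustrative assumptions, not a Databricks API:

```python
# Sketch of renaming a Spark part file with dbutils.fs.mv instead of cp.
# mv avoids the extra copy, but is still one operation per file.
# The helper below is hypothetical; paths are placeholders.

def mv_args(out_dir, part_name, custom_name):
    """Build the (source, destination) pair for a dbutils.fs.mv rename."""
    base = out_dir.rstrip("/")
    return base + "/" + part_name, base + "/" + custom_name

# On Databricks (commented out; requires a cluster):
# src, dst = mv_args(out_dir, part, "batch_0001.csv")
# dbutils.fs.mv(src, dst)

src, dst = mv_args("abfss://c@a.dfs.core.windows.net/stage/",
                   "part-00000.csv", "batch_0001.csv")
print(src)  # → abfss://c@a.dfs.core.windows.net/stage/part-00000.csv
print(dst)  # → abfss://c@a.dfs.core.windows.net/stage/batch_0001.csv
```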