06-06-2024 04:39 AM
Hi Community
Actually my requirement is simple: I need to drop files into Azure Data Lake Gen2 storage from Databricks.
But when I use
df.coalesce(1).write.csv("url to gen 2/stage/")
it creates a part-*.csv file, but I need to rename it to a custom name.
I have gone through a workaround using
dbutils.fs.cp()
It worked, but I have thousands of batch files to transfer like that with custom names, so every time I do that .cp() operation it creates a new job and takes a lot of time compared to the direct write as part-*.csv.
Is there any other workaround? And I can't use other libraries like Pandas or anything else; I'm not allowed to.
Please help me.
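For reference, here is roughly what my current workaround looks like. This is a minimal sketch, assuming it runs in a Databricks notebook where dbutils is available; the abfss paths, container/account names, and custom_name.csv are placeholders, and df is my DataFrame.

# Write a single part file to a staging folder, then copy it out under a custom name.
stage_dir = "abfss://mycontainer@myaccount.dfs.core.windows.net/stage/"  # placeholder ADLS Gen2 path
target_file = "abfss://mycontainer@myaccount.dfs.core.windows.net/out/custom_name.csv"  # placeholder name

df.coalesce(1).write.mode("overwrite").csv(stage_dir)

# Locate the single part-*.csv file Spark produced, then copy it to the custom name.
part_file = [f.path for f in dbutils.fs.ls(stage_dir) if f.name.startswith("part-")][0]
dbutils.fs.cp(part_file, target_file)

It is that second cp step, repeated per file, that adds the extra jobs and time.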
06-06-2024 12:48 PM
Hi @Philospher1425,
06-06-2024 12:55 PM
That's the reason why I asked for an alternative workaround. I have tried mv; it still doesn't reduce the number of jobs.
Spark could add this tiny thing: when we write like df.write.csv("/filename.csv"), it should write to the given filename instead of creating a folder. I know it seems silly that this hasn't been done to this day. I just want an alternative (without file operations, please), as they add up the time. If it's not possible, just leave it; I will move on.
06-06-2024 01:10 PM
@Philospher1425,
The problem is that in order to generate a single .csv file you have to coalesce your dataset to one partition and lose all the parallelism that Spark provides. While this might work for small datasets, such a pattern will certainly lead to memory issues on larger ones.
If you think that the pattern you described is a good and valid idea, please submit it to https://github.com/apache/spark or the Databricks Ideas Portal.
06-12-2024 08:36 AM
Yep, totally agree with you, dbutils.fs.mv is much faster and is the best way to rename files.
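For anyone landing here later, a minimal sketch of the mv-based rename, again assuming a Databricks notebook where dbutils is available and using the same placeholder paths as above:

stage_dir = "abfss://mycontainer@myaccount.dfs.core.windows.net/stage/"  # placeholder staging path
target_file = "abfss://mycontainer@myaccount.dfs.core.windows.net/out/custom_name.csv"  # placeholder name

# Locate the part file from the coalesce(1) write and rename it in place.
# mv moves rather than copies, so it avoids rewriting the data.
part_file = [f.path for f in dbutils.fs.ls(stage_dir) if f.name.startswith("part-")][0]
dbutils.fs.mv(part_file, target_file)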