Hi Community
My requirement is simple: I need to drop files into Azure Data Lake Gen2 storage from Databricks.
But when I use

df.coalesce(1).write.csv("url to gen 2/stage/")

it creates a part-*.csv file, and I need to rename it to a custom name.
I have tried a workaround using

dbutils.fs.cp()

It worked, but I have thousands of batch files to transfer like that with custom names, so every .cp() operation kicks off a new job and takes a lot of time compared to directly pushing the part-*.csv file.
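For context, here is roughly what my current workaround looks like (the abfss:// paths and file names below are just placeholders, not my real ones):

# Current workaround, sketched with placeholder paths and names:
# 1) write to a temp folder, 2) copy the single part file out under a custom name.
out_dir = "abfss://container@account.dfs.core.windows.net/stage/tmp_out"
df.coalesce(1).write.mode("overwrite").csv(out_dir)

# locate the part-*.csv file Spark produced
part_file = [f.path for f in dbutils.fs.ls(out_dir) if f.name.startswith("part-")][0]

# copy it to its final custom name, then remove the temp folder
dbutils.fs.cp(part_file, "abfss://container@account.dfs.core.windows.net/stage/custom_name.csv")
dbutils.fs.rm(out_dir, recurse=True)

It is these per-file cp/rm calls that create the extra jobs and slow everything down.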
Is there any better workaround? I can't use other libraries like pandas or anything else; I'm not allowed to.
Please help me.