Data Engineering

Renaming a file in Databricks is so hard. How to make it simpler?

Philospher1425
New Contributor II

Hi Community 

 

My requirement is actually simple: I need to drop files into Azure Data Lake Gen 2 storage from Databricks.

 

But when I use

df.coalesce(1).write.csv("<url to gen 2>/stage/")

 

It creates a part-*.csv file, but I need to give it a custom name.

 

I have gone through a workaround using

dbutils.fs.cp()

It worked, but I have thousands of batch files to transfer like that with custom names, so every .cp() operation creates a new job and takes a lot of time compared to a direct write as part-*.csv.
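For reference, here is roughly what I am doing now; the abfss paths and file names below are placeholders:

# Current workaround (hypothetical paths and names):
# write a single part file, then copy it to the custom name.
target_dir = "abfss://stage@myaccount.dfs.core.windows.net/tmp_out/"
target_file = "abfss://stage@myaccount.dfs.core.windows.net/batch_0001.csv"

df.coalesce(1).write.mode("overwrite").csv(target_dir)

# Locate the generated part-*.csv and copy it under the desired name.
part_file = [f.path for f in dbutils.fs.ls(target_dir) if f.name.startswith("part-")][0]
dbutils.fs.cp(part_file, target_file)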

 

 

Is there any workaround? I can't use other libraries like pandas or anything else; I'm not allowed to.

 

 

Please help me.

 

4 REPLIES

raphaelblg
Databricks Employee

Hi @Philospher1425,

 

Allow me to clarify that dbutils.fs serves as an interface to submit commands to your cloud provider storage. As such, the speed of copy operations is determined by the cloud provider and is beyond Databricks' control.
 
That being said, you may find that using dbutils.fs.mv results in a faster process, as it is a move operation rather than a copy operation. However, please note that this is not a Databricks-specific issue, but rather a characteristic of the filesystem.
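As a minimal sketch, reusing the hypothetical paths from the example above, the rename step becomes:

# Same part-file lookup as before, but move (rename) instead of copy.
part_file = [f.path for f in dbutils.fs.ls(target_dir) if f.name.startswith("part-")][0]
dbutils.fs.mv(part_file, target_file)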
 
Best regards,

Raphael Balogo
Sr. Technical Solutions Engineer
Databricks

That's the reason I asked for an alternative workaround. I have tried mv anyway; it doesn't reduce the number of jobs.

Spark could add this tiny thing: when we write something like df.write("/filename.csv"), it should write to the given filename instead of creating a folder. It seems silly that this hasn't been done till today. I just want an alternative (without file operations, please), as they add up the time. If it's not possible, just leave it; I will move on.

@Philospher1425,

The problem is that in order to generate a single .csv file, you have to coalesce your dataset to one partition and lose all the parallelism that Spark provides. While this might work for small datasets, such a pattern will certainly lead to memory issues on larger datasets.
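To illustrate the tradeoff (paths are again hypothetical):

# Default write: one part file per partition, produced by parallel tasks.
df.write.csv("abfss://stage@myaccount.dfs.core.windows.net/out_parallel/")

# Single-file write: every row is funneled through one task, which risks
# out-of-memory errors on large datasets.
df.coalesce(1).write.csv("abfss://stage@myaccount.dfs.core.windows.net/out_single/")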

If you think the pattern you described is a good and valid idea, please submit it to the Apache Spark project (https://github.com/apache/spark) or to the Databricks Ideas Portal.

Best regards,

Raphael Balogo
Sr. Technical Solutions Engineer
Databricks

Yep, totally agree with you: dbutils.fs.mv is much faster and is the best way to rename files.
