Optimizing Writes from Databricks to Snowflake

pvignesh92
Honored Contributor

My job, after doing all the processing in the Databricks layer, writes the final output to Snowflake tables using the df.write API and the Spark Snowflake connector. I often see that even a small dataset (16 partitions with 20k rows per partition) takes around 2 minutes to write. Is there any way the write can be optimized?
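For context, the write looks roughly like this (the sfOptions values and the table name FINAL_OUTPUT are placeholders, not the actual settings):

```python
# Placeholder connection options for the Spark Snowflake connector.
sfOptions = {
    "sfURL": "<account>.snowflakecomputing.com",
    "sfUser": "<user>",
    "sfPassword": "<password>",
    "sfDatabase": "<database>",
    "sfSchema": "<schema>",
    "sfWarehouse": "<warehouse>",
}

# Write the final DataFrame to a Snowflake table via the connector.
(df.write
   .format("snowflake")
   .options(**sfOptions)
   .option("dbtable", "FINAL_OUTPUT")
   .mode("overwrite")
   .save())
```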

1 ACCEPTED SOLUTION

pvignesh92
Honored Contributor

There are a few options I tried that gave me better performance:

  1. Cache the intermediate or final results so that the DataFrame computation is not repeated while writing (see the sketch after this list).
  2. Coalesce the results down to 1x or 0.5x your number of cores, and make sure each partition is at least around 128 MB so that the writes happen efficiently.
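A minimal sketch of both steps, assuming an 8-core cluster and the same placeholder sfOptions as above:

```python
# 1. Cache so the write does not recompute the upstream transformations.
df = df.cache()
df.count()  # materialize the cache before writing

# 2. Coalesce to ~1x (or 0.5x) the core count; with 8 cores, 8 partitions.
#    Aim for partitions of at least ~128 MB for efficient bulk loads.
(df.coalesce(8)
   .write
   .format("snowflake")
   .options(**sfOptions)
   .option("dbtable", "FINAL_OUTPUT")
   .mode("overwrite")
   .save())
```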


6 REPLIES

-werners-
Esteemed Contributor III

AFAIK the Spark connector is already optimized. Can you try changing the partitioning of your dataset? For bulk loading, larger files are better.
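For example, checking and then reducing the partition count before the write (the count of 4 is purely illustrative):

```python
# Check how many partitions the DataFrame currently has.
print(df.rdd.getNumPartitions())

# Coalesce into fewer, larger partitions for bulk loading;
# pick a value based on data size rather than copying this one.
df = df.coalesce(4)
```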

pvignesh92
Honored Contributor

Yes. I brought it down to 4 partitions during my transformations and tried again. On average, the write still takes 2 minutes. I'm not sure if that's the expected behavior over a JDBC connection.

-werners-
Esteemed Contributor III

That seems slow to me.

Are you sure you are not doing any Spark processing?

Because if you are, a chunk of those 2 minutes is Spark transforming the data.
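A rough way to check: materialize the DataFrame first, then time only the write, so any transformation cost is excluded (sfOptions and FINAL_OUTPUT are the same placeholders as above):

```python
import time

# Force all upstream transformations to run and cache the result.
df = df.cache()
df.count()

# Now time only the Snowflake write; any remaining latency is the
# connector/bulk-load path, not Spark computation.
start = time.time()
(df.write
   .format("snowflake")
   .options(**sfOptions)
   .option("dbtable", "FINAL_OUTPUT")
   .mode("overwrite")
   .save())
print(f"Write took {time.time() - start:.1f}s")
```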

Vartika
Moderator

Hi @Vigneshraja Palaniraj​ 

Hope all is well!

Just wanted to check in to see if you were able to resolve your issue. Would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help.

We'd love to hear from you.

Thanks!

pvignesh92
Honored Contributor

Thanks @Vartika Nain​ for following up. I closed this thread.
