
Optimizing Writes from Databricks to Snowflake

pvignesh92
Honored Contributor

My job, after doing all its processing in the Databricks layer, writes the final output to Snowflake tables using the df.write API with the Spark Snowflake connector. I often see that even a small dataset (16 partitions with 20k rows in each partition) takes around 2 minutes to write. Is there any way the write can be optimized?
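For reference, a minimal sketch of the write call, assuming the standard Spark Snowflake connector options; the connection values and the FINAL_OUTPUT table name are placeholders, not details from the actual job:

```python
# Minimal sketch of the write path; all connection values and the
# table name below are placeholders, not the real job's settings.
sf_options = {
    "sfURL": "<account>.snowflakecomputing.com",
    "sfUser": "<user>",
    "sfPassword": "<password>",
    "sfDatabase": "<database>",
    "sfSchema": "<schema>",
    "sfWarehouse": "<warehouse>",
}

(df.write
    .format("snowflake")                # Databricks alias for net.snowflake.spark.snowflake
    .options(**sf_options)
    .option("dbtable", "FINAL_OUTPUT")  # hypothetical target table
    .mode("overwrite")
    .save())
```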

1 ACCEPTED SOLUTION

pvignesh92
Honored Contributor

There are a few options I tried that gave me better performance (a sketch follows the list):

  1. Cache the intermediate or final results so that the DataFrame computation is not repeated during the write.
  2. Coalesce the results into 1x or 0.5x your number of cores, and make sure each partition is at least 128 MB so the writes happen efficiently.
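
A rough sketch of both steps; the core count, DataFrame names, and target table are illustrative assumptions, and sf_options is the same connector options dict as in the earlier sketch:

```python
# Step 1: cache and materialize so the write does not recompute the lineage.
final_df = transformed_df.cache()
final_df.count()  # forces the cache to be populated before the write

# Step 2: coalesce to roughly 1x the cluster's cores (use // 2 for 0.5x).
num_cores = spark.sparkContext.defaultParallelism

(final_df
    .coalesce(num_cores)
    .write
    .format("snowflake")
    .options(**sf_options)              # connector options as sketched earlier
    .option("dbtable", "FINAL_OUTPUT")  # hypothetical target table
    .mode("overwrite")
    .save())
```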


6 REPLIES

-werners-
Esteemed Contributor III

AFAIK the Spark connector is already optimized. Can you try changing the partitioning of your dataset? For bulk loading, larger files are better.
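Something like this (a rough sketch; the target count of 4 is just an example):

```python
# Fewer, larger partitions produce larger staged files for Snowflake's bulk load.
print(df.rdd.getNumPartitions())  # 16 in the case described above

df = df.coalesce(4)  # fewer, larger partitions before df.write
```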

pvignesh92
Honored Contributor

Yes. I brought it down to 4 partitions during my transformations and tried again. On average, the write still takes 2 minutes. I'm not sure if that's the expected behavior with a JDBC connection.

-werners-
Esteemed Contributor III

Seems slow to me.

Are you sure you are not doing any Spark processing?

Because if so, a chunk of that 2 minutes is Spark transforming the data.
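
One way to check (a minimal sketch; sf_options and the table name are placeholders as before): force the transformations to run first, then time only the write:

```python
import time

df.cache()
df.count()  # runs all Spark transformations up front

start = time.time()
(df.write
    .format("snowflake")
    .options(**sf_options)
    .option("dbtable", "FINAL_OUTPUT")
    .mode("overwrite")
    .save())
print(f"Write alone took {time.time() - start:.1f}s")  # isolates the transfer time
```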

Vartika
Databricks Employee

Hi @Vigneshraja Palaniraj,

Hope all is well!

Just wanted to check in to see if you were able to resolve your issue. If so, would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help.

We'd love to hear from you.

Thanks!

pvignesh92
Honored Contributor

Thanks @Vartika Nain for following up. I closed this thread.

