Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

S3 write to bucket - best performance tips

720677
New Contributor III

I'm writing large DataFrames as Delta tables to S3 buckets:

df.write \
  .format("delta") \
  .mode("append") \
  .partitionBy(partitionColumns) \
  .option("mergeSchema", "true") \
  .save(target_path)

What are the best tips to improve the performance of this write? Today it takes several minutes to finish writing to S3.

We are using the latest cluster versions with Spark 3.4.0 and Python.

  1. Which Spark config parameters can improve the write? Should I try "spark.hadoop.fs.s3a.bucket.all.committer.magic.enabled"? If so, how?
  2. Should I try other parameters such as "spark.hadoop.fs.s3a.impl.disable.cache"?
  3. The DataFrame is only partitioned by one column. Should I partition by more columns to parallelize the write, or will that have no impact?
  4. What else can I check?

2 REPLIES

Anonymous
Not applicable

@Pablo (Ariel):

There are several ways to improve the performance of writing data to S3 using Spark. Here are some tips and recommendations:

  1. Increase the size of the write buffer: By default, Spark writes data in 1 MB batches. You can increase the write buffer size to reduce the number of requests made to S3 and improve performance. You can set the buffer size with the configuration parameter spark.databricks.delta.logFileCommitBufferSize.
  2. Use a region-local S3 endpoint: If your S3 bucket is in a different region than your Databricks workspace, pointing at the correct regional endpoint can improve write performance. You can set the fs.s3a.endpoint configuration parameter to the endpoint URL.
  3. Use S3Guard: S3Guard is a Hadoop feature that provides a consistent view of S3 data even when multiple writers are writing to the same bucket. You can enable it by setting the fs.s3a.metadatastore.impl configuration parameter to org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore (the default, org.apache.hadoop.fs.s3a.s3guard.NullMetadataStore, leaves S3Guard disabled).
  4. Use instance storage: If your Databricks cluster has instance storage, you can use it to write data to local disk before copying it to S3, which reduces network traffic. You can set the spark.databricks.delta.logStore. configuration parameter to local.
  5. Parallelize the write: Partitioning the DataFrame by more than one column can help parallelize the write and improve performance. However, the number of partitions should not exceed the number of available cores in your cluster. You can control the number of partitions with the repartition or coalesce methods (see the sketch after this list).
  6. Optimize your data: If the write produces a lot of small files, you can use the spark.sql.files.maxRecordsPerFile configuration parameter to control the size of the output files.
  7. Optimize your storage format: Using a columnar storage format like Parquet (which Delta Lake already uses underneath) can reduce the amount of data that needs to be written to S3.
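To make tips 5 and 6 concrete, here is a minimal sketch of how they could be applied from a notebook. The record limit and partition count are illustrative placeholders, not recommended values, and filesystem-level settings such as fs.s3a.endpoint are normally supplied in the cluster's Spark config before startup rather than at runtime.

# Cap the number of records per output file (tip 6); 5,000,000 is a placeholder.
spark.conf.set("spark.sql.files.maxRecordsPerFile", 5000000)

# Repartition by the Delta partition column(s) so the write is spread across the
# cluster while each task touches only a few partition directories (tip 5).
# Assumes partitionColumns is a list of column names; 64 is a placeholder count.
(df.repartition(64, *partitionColumns)
   .write
   .format("delta")
   .mode("append")
   .partitionBy(partitionColumns)
   .option("mergeSchema", "true")
   .save(target_path))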

Regarding the specific configuration parameters you mentioned:

  • spark.hadoop.fs.s3a.bucket.all.committer.magic.enabled: Enables the S3A "magic" committer for all S3 buckets. The magic committer can improve write performance by reducing the number of S3 requests made during a write operation, but it is only available for certain file systems and may not be compatible with Delta Lake (a config sketch follows this list).
  • spark.hadoop.fs.s3a.impl.disable.cache: Disables the S3A client cache. Disabling the cache can improve write performance by reducing the amount of memory used by the S3A client, but it can also increase the number of requests made to S3.
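If you decide to test those two flags, note that they are Hadoop/S3A settings carried under the spark.hadoop. prefix, so they are normally placed in the cluster's Spark config (cluster settings > Advanced options > Spark) rather than set from a notebook. The snippet below is only an illustration; fs.s3a.committer.name is an additional setting the magic committer typically relies on, and whether any of this helps a Delta write should be verified on a test table.

# Illustrative cluster Spark config entries (not verified for Delta writes):
#   spark.hadoop.fs.s3a.bucket.all.committer.magic.enabled true
#   spark.hadoop.fs.s3a.committer.name magic
#   spark.hadoop.fs.s3a.impl.disable.cache true

# From a notebook you can confirm what the running cluster actually uses:
print(spark.conf.get("spark.hadoop.fs.s3a.bucket.all.committer.magic.enabled", "not set"))
print(spark.conf.get("spark.hadoop.fs.s3a.impl.disable.cache", "not set"))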

Overall, it's recommended to experiment with different configuration parameters and settings to find the best combination for your specific use case.

720677
New Contributor III

Thank you for the answer - I will start checking the changes.

I couldn't find the logFileCommitBufferSize parameter in the Databricks configuration.

Can you give me a link?

What should the value be, for example:

spark.databricks.delta.logFileCommitBufferSize 50mb

or

spark.databricks.delta.logFileCommitBufferSize 50000

?

Thank you
