Poor Auto Loader performance with CSV files on S3
11-02-2022 11:00 AM
I set up a notebook that uses Auto Loader to ingest data from an S3 bucket containing over 500K CSV files into a Hive table.
Recently the number of rows (and input files) in the table grew from around 150M to 530M, and now each batch takes around an hour to complete, compared to around 1-2 minutes before the growth. I have tried optimizing the table, enabling auto optimize, setting spark.sql.shuffle.partitions to 2000 on the cluster, and using high-performance nodes, but each batch still takes a very long time to complete.
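For context, the stream follows the standard Auto Loader pattern for CSV on S3; a rough sketch of the setup (bucket paths, table name, and trigger interval are placeholders, not the exact job):

```python
# Rough sketch of the ingestion stream (placeholder paths/names; runs in a
# Databricks notebook where `spark` is already defined).
df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "csv")                                     # source files are CSV
    .option("cloudFiles.schemaLocation", "s3://my-bucket/_schemas/events")  # where Auto Loader tracks the inferred schema
    .option("header", "true")
    .load("s3://my-bucket/raw/events/")
)

(
    df.writeStream
    .option("checkpointLocation", "s3://my-bucket/_checkpoints/events")
    .trigger(processingTime="5 minutes")   # each micro-batch is what now takes ~1 hour
    .toTable("default.events")             # Delta table registered in the Hive metastore
)
```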
Is there anything else I can try to improve the performance?
Thank you
11-02-2022 11:26 PM
Could you please go through https://docs.databricks.com/optimizations/index.html and let us know if it helps?
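For example, the auto optimize settings and one-off compaction that the guide covers look roughly like this (table and column names are illustrative, not from your job):

```python
# Enable auto optimize on the target Delta table (illustrative table name)
spark.sql("""
    ALTER TABLE default.events
    SET TBLPROPERTIES (
        'delta.autoOptimize.optimizeWrite' = 'true',
        'delta.autoOptimize.autoCompact'   = 'true'
    )
""")

# One-off compaction of existing small files; the ZORDER column is illustrative
spark.sql("OPTIMIZE default.events ZORDER BY (event_date)")
```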
11-03-2022 02:44 AM
Are you sure the issue lies in the Delta Lake merge?
It could also be Auto Loader itself.
Can you check these links?
https://docs.databricks.com/ingestion/auto-loader/file-detection-modes.html
https://docs.databricks.com/ingestion/auto-loader/production.html
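In particular, with 500K+ files in the bucket, the default directory listing mode has to repeatedly list that prefix on every batch. Switching to file notification mode and bounding the batch size, as those pages describe, looks roughly like this (paths and region are placeholders; the option names are standard Auto Loader options):

```python
# Sketch: file notification mode + bounded micro-batches (placeholder paths/region)
df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "csv")
    .option("cloudFiles.useNotifications", "true")    # discover new files via SQS/SNS events instead of listing S3
    .option("cloudFiles.region", "us-east-1")         # region used to set up the notification resources
    .option("cloudFiles.maxFilesPerTrigger", 10000)   # cap how many files each micro-batch processes
    .option("cloudFiles.schemaLocation", "s3://my-bucket/_schemas/events")
    .load("s3://my-bucket/raw/events/")
)
```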
01-16-2023 10:10 PM
Hi @Dotan Schachter
Hope all is well! Just wanted to check in to see whether you were able to resolve your issue. If so, would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help.
We'd love to hear from you.
Thanks!