Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Need to ingest millions of CSV files from AWS S3

Kumarashokjmu
New Contributor II

I have a need to ingest millions of CSV files from an AWS S3 bucket. I am running into AWS S3 throttling, and the notebook process runs for 8+ hours and sometimes fails. Looking at cluster performance, it is only about 60% utilized.

I need suggestions on avoiding AWS throttling, on what the source file size should be if I have to combine small files into bigger ones for processing, on speeding up ingestion, and on any other Spark parameters that need tuning.

Thanks in advance.

Ash

4 REPLIES

Thank you so much Kaniz, I really appreciate your response with a detailed reply on each topic. I will post more over time to get help from you.

Ashok

 

jose_gonzalez
Databricks Employee

Hi @Kumarashokjmu,

I would recommend using Databricks Auto Loader to ingest your CSV files incrementally. You can find examples and more details here: https://docs.databricks.com/en/ingestion/auto-loader/index.html#what-is-auto-loader
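
A minimal sketch of what that could look like in a notebook (bucket paths, schema/checkpoint locations, and the target table name are placeholders, and spark is the notebook's built-in SparkSession):

# Incrementally ingest CSV files from S3 with Auto Loader (the cloudFiles source).
# All paths and the table name are placeholders -- adjust to your environment.
df = (spark.readStream
      .format("cloudFiles")
      .option("cloudFiles.format", "csv")
      .option("cloudFiles.schemaLocation", "s3://my-bucket/_schemas/raw_csv")  # tracks the inferred schema
      .option("header", "true")
      .load("s3://my-bucket/raw/csv/"))

(df.writeStream
   .option("checkpointLocation", "s3://my-bucket/_checkpoints/raw_csv")  # remembers which files were already ingested
   .trigger(availableNow=True)  # drain the current backlog in batches, then stop
   .toTable("bronze.raw_csv"))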

jose_gonzalez
Databricks Employee

Hi @Kumarashokjmu,

Just a friendly follow-up. Did you have time to test Auto Loader? Do you have any follow-up questions? Please let us know.

kulkpd
Contributor

If you want to load all the data at once, use Auto Loader or a DLT pipeline with directory listing, provided the files are lexically ordered.

OR
If you want to perform an incremental load, divide the work into two jobs: a historic data load and a live data load.
Live data:
Use Auto Loader or a Delta Live Tables pipeline with file notifications to load the data into a Delta table. File notification mode is the scalable solution recommended by Databricks (a minimal sketch follows the link below).
https://docs.databricks.com/en/ingestion/auto-loader/options.html#directory-listing-options
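
A minimal sketch of the file notification variant (cloudFiles.useNotifications is the documented option; the paths here are placeholders):

# Same Auto Loader source, but using S3 event notifications instead of repeated directory listing.
# Databricks provisions the SNS/SQS resources when cloudFiles.useNotifications is true.
live_df = (spark.readStream
           .format("cloudFiles")
           .option("cloudFiles.format", "csv")
           .option("cloudFiles.useNotifications", "true")  # file notification mode
           .option("cloudFiles.schemaLocation", "s3://my-bucket/_schemas/live_csv")
           .load("s3://my-bucket/live/csv/"))

This avoids listing millions of objects on every micro-batch, which is where much of the S3 request pressure comes from.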

Historic load:
Use an Auto Loader job to load all the data. If the files are not lexically ordered, try using the S3 Inventory option to divide the workload into micro-batches; with this approach, multiple batches can run in parallel (see the sketch below).
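
A rough sketch of that inventory-based split, assuming an S3 Inventory report in CSV format whose columns include the object key (the column names, batch count, and paths here are assumptions):

# Divide the historic backlog into micro-batches using an S3 Inventory listing.
# Column names, paths, and the batch count are assumptions -- match them to your inventory config.
from pyspark.sql import functions as F

inventory = (spark.read.csv("s3://my-bucket/inventory/latest/", header=False)
             .toDF("bucket", "key", "size", "last_modified"))

num_batches = 50
batches = (inventory
           .filter(F.col("key").endswith(".csv"))
           .withColumn("batch_id", F.abs(F.hash("key")) % num_batches))

# Each batch could also be submitted as its own job so that several run in parallel.
for i in range(num_batches):
    paths = [f"s3://my-bucket/{r.key}"
             for r in batches.filter(F.col("batch_id") == i).select("key").collect()]
    if paths:
        (spark.read.csv(paths, header=True)
              .write.mode("append")
              .saveAsTable("bronze.raw_csv_historic"))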

Handling S3 throttling issues:
If you are facing S3 throttling, try limiting maxFilesPerTrigger to 10k-15k.
Increase the spark.network.timeout configuration in the Spark init block (example below).
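
For example (the 10,000 value is illustrative; cloudFiles.maxFilesPerTrigger is the documented Auto Loader option, and spark.network.timeout normally has to go into the cluster's Spark config rather than being set at runtime):

# In the cluster's Spark config (init/config block), e.g.:
#   spark.network.timeout 600s

# Cap how many files Auto Loader picks up per micro-batch to ease S3 request pressure.
throttled_df = (spark.readStream
                .format("cloudFiles")
                .option("cloudFiles.format", "csv")
                .option("cloudFiles.maxFilesPerTrigger", 10000)  # 10k-15k as suggested above
                .option("cloudFiles.schemaLocation", "s3://my-bucket/_schemas/raw_csv")
                .load("s3://my-bucket/raw/csv/"))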

Let us know if you need more information.

 
