Hello everyone,
I'm currently working on a setup where my unprocessed real-time data arrives as .json files in Azure Data Lake Storage (ADLS). Every x minutes, I use Databricks Autoloader to pick up the new data, run my ETL transformations, and store the cleaned data in Databricks tables. This works fine for moderate-volume sources, but for certain high-volume sources that generate millions of small JSON files per day, I'm hitting the classic "too many small files" issue: the overhead of scanning and listing that many files significantly increases my processing times.
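For context, here's roughly what my current ingestion job looks like (run on a Databricks cluster where `spark` is already available; paths, table names, and the trigger are simplified placeholders rather than my real config):

```python
# Simplified version of the current job: Autoloader in directory-listing mode,
# reading raw JSON from ADLS and writing to a Delta table.
raw_path = "abfss://landing@<storage-account>.dfs.core.windows.net/events/"               # placeholder
schema_path = "abfss://landing@<storage-account>.dfs.core.windows.net/_schemas/events/"   # placeholder
checkpoint_path = "abfss://landing@<storage-account>.dfs.core.windows.net/_checkpoints/events/"  # placeholder

stream = (
    spark.readStream
        .format("cloudFiles")
        .option("cloudFiles.format", "json")
        .option("cloudFiles.schemaLocation", schema_path)  # schema inference/evolution tracking
        .load(raw_path)
)

# ... ETL transformations happen here ...

(
    stream.writeStream
        .option("checkpointLocation", checkpoint_path)
        .trigger(availableNow=True)   # scheduled every x minutes as an incremental batch
        .toTable("bronze.events")     # placeholder table name
)
```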
I've seen suggestions to periodically merge or aggregate these small files into larger ones, but that feels like an extra step that suffers from the same file-listing overhead. I'm wondering if there's a more direct workaround or best practice to:
- Reduce the overhead of listing and reading these files.
- Possibly store data in a more efficient format (Parquet/Delta) at landing time, if that's feasible.
- Use Autoloader features (like cloudFiles.mergeSchema or cloudFiles.useNotifications) in a more optimal way for large volumes (rough sketch of what I mean below).
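On the notification point specifically, this is roughly how I understand file notification mode would be enabled based on the docs (the Azure subscription, service principal, and resource group values are placeholders, and I haven't validated this setup myself):

```python
# Same stream as above, but with file notification mode so Autoloader consumes
# new-file events from a queue instead of listing the landing directory.
stream = (
    spark.readStream
        .format("cloudFiles")
        .option("cloudFiles.format", "json")
        .option("cloudFiles.schemaLocation", schema_path)
        .option("cloudFiles.useNotifications", "true")
        .option("cloudFiles.subscriptionId", "<azure-subscription-id>")          # placeholder
        .option("cloudFiles.tenantId", "<azure-tenant-id>")                      # placeholder
        .option("cloudFiles.clientId", "<service-principal-client-id>")          # placeholder
        .option("cloudFiles.clientSecret", "<service-principal-secret>")         # placeholder
        .option("cloudFiles.resourceGroup", "<storage-account-resource-group>")  # placeholder
        .load(raw_path)
)
```

My understanding is that this mode provisions an Event Grid subscription and a storage queue behind the scenes, so the listing cost should drop, but I'd love to hear whether it actually holds up at millions of files per day.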
Has anyone successfully tackled a similar scenario? Any recommendations on how to handle a massive number of small files without incurring huge overhead on each ETL cycle would be greatly appreciated!
Thank you in advance for your insights.