03-29-2023 05:14 AM
Hi community
I have an Auto Loader pipeline running with the following configuration, but unfortunately it does not detect all files (see the query definition below).
The folder that needs to be read contains 38,246 files, all with the same schema and structure.
If I look at `cloud_files_state`, I only see 4,999 files. Am I doing something wrong? Or is this an initial count that will increase as these files are ingested?
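For reference, the ingestion state mentioned above can be inspected with the `cloud_files_state` table-valued function. A minimal sketch, which only runs on Databricks; the checkpoint path is a placeholder, not the poster's actual location:

```python
# Sketch: inspect which files Auto Loader has discovered so far.
# '/path/to/checkpoint' is a placeholder for the stream's checkpoint location.
discovered = spark.sql(
    "SELECT * FROM cloud_files_state('/path/to/checkpoint')"
)
# Number of files currently tracked in the Auto Loader state
print(discovered.count())
```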
03-29-2023 10:42 AM
Hi @Fabrice Deseyn , My understanding is that this could be because Auto Loader returns a fixed number of results per API call, as explained here: https://docs.databricks.com/ingestion/auto-loader/directory-listing-mode.html#how-does-directory-lis...
03-29-2023 05:47 AM
@Fabrice Deseyn
It looks like your storage layout is not suitable for incremental listing. Use regular directory listing to pick up all of the files.
https://docs.databricks.com/ingestion/auto-loader/directory-listing-mode.html#date-partitioned-files
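Concretely, incremental listing can be turned off so that Auto Loader falls back to a full directory listing on each run. A minimal sketch, which only runs on Databricks; the format, paths, and schema location are placeholders, not the poster's actual configuration:

```python
# Sketch: force full directory listing instead of incremental listing.
# All paths below are placeholders.
df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    # Default is "auto"; "false" forces a full directory listing each run
    .option("cloudFiles.useIncrementalListing", "false")
    .option("cloudFiles.schemaLocation", "/path/to/schema")
    .load("/path/to/input")
)
```

Note that `"auto"` lets Auto Loader decide based on whether the path looks lexically ordered (e.g. date-partitioned), which is why storage layouts that are not lexically ordered can silently miss files when incremental listing kicks in.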
03-29-2023 06:19 AM
@Daniel Sahal
Thanks, apparently I overlooked the `useIncrementalListing` setting. A big mistake on my side!
Thanks for the second pair of eyes!
03-30-2023 12:39 AM
Hi @Fabrice Deseyn
Thank you for posting your question in our community! We are happy to assist you.
To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers your question?
This will also help other community members who may have similar questions in the future. Thank you for your participation and let us know if you need any further assistance!