I would like to confirm something. We are using Azure Databricks and Azure Blob Storage.
We have a `landing` container with directories such as `request_type_a` and `request_type_b`, each receiving files that trigger different jobs in Databricks. We are starting to consider what happens when these directories approach the 10,000-file limit that applies to locations monitored by Databricks file arrival triggers.
We are thinking about moving older blobs out of these directories into separate archive directories that are not monitored by Databricks, creating a structure like:
```
landing/request_type_a/file.json
landing/request_type_a_archive/old_file.json
landing/request_type_b/file.json
landing/request_type_b_archive/old_file.json
```
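To make the question concrete, the housekeeping job we have in mind would look roughly like the sketch below, using the `azure-storage-blob` SDK. The 30-day cutoff, the prefix, and the `AZURE_STORAGE_CONNECTION_STRING` environment variable are placeholders for illustration. Since Blob storage has no true directories, the "move" is a server-side copy to the archive prefix followed by a delete of the original:

```python
# Minimal sketch of the archival move. The connection-string env var,
# container name, prefix, and age cutoff are all assumptions.
import os
from datetime import datetime, timedelta, timezone

from azure.storage.blob import ContainerClient

AGE_THRESHOLD = timedelta(days=30)  # assumed cutoff for "older" blobs

container = ContainerClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"],  # assumed env var
    container_name="landing",
)

cutoff = datetime.now(timezone.utc) - AGE_THRESHOLD

for blob in container.list_blobs(name_starts_with="request_type_a/"):
    if blob.last_modified < cutoff:
        source = container.get_blob_client(blob.name)
        target_name = blob.name.replace(
            "request_type_a/", "request_type_a_archive/", 1
        )
        target = container.get_blob_client(target_name)
        # Start a server-side copy, then remove the original.
        # (A production version would poll the copy status
        # before deleting the source blob.)
        target.start_copy_from_url(source.url)
        source.delete_blob()
```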
Is this a reasonable way to ensure we do not exceed the 10,000-file limit, or do you foresee issues with this approach?
Additionally, do you know whether switching older files to the archive access tier would mean they no longer count toward the 10,000-file limit?
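For reference, the tier change we mean would be something like the following (again just a sketch with `azure-storage-blob`; the blob path and environment variable are placeholders). Our understanding is that this changes only the access tier and the blob stays at the same path, which is why we are unsure whether it would still count:

```python
# Sketch of the tier change. This only alters the access tier;
# the blob remains at the same path in the monitored directory.
import os

from azure.storage.blob import BlobClient

blob = BlobClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"],  # assumed env var
    container_name="landing",
    blob_name="request_type_a/old_file.json",  # illustrative path
)
blob.set_standard_blob_tier("Archive")
```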