Currently I have files landing in a storage account. They are all located in subfolders of a common directory; some subdirectories may contain files, others may not. Each file name is unique and corresponds to a unique table, and no two files update the same table. Is it possible to use Auto Loader and cloudFiles in such a way that it can be given the path to the main directory as input, traverse its subdirectories, and process the data per file?
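One common pattern for this is a single Auto Loader stream pointed at the root directory (Auto Loader discovers files in subdirectories recursively), combined with `foreachBatch` to route each source file's rows to its own table. The sketch below is a minimal illustration, not a tested solution: it assumes CSV files, assumes the file-name-to-table mapping is just the file name without its extension, and uses hypothetical paths (`landing_root`, `checkpoint_root`). The streaming part only runs on a Databricks cluster; the helper function is plain Python.

```python
import os


def table_name_from_path(file_path: str) -> str:
    """Derive the target table name from a landed file's path.

    Assumes each unique file name maps to exactly one table,
    e.g. '/mnt/landing/sub1/orders.csv' -> 'orders'.
    """
    base = os.path.basename(file_path)
    return os.path.splitext(base)[0]


def start_autoloader_stream(spark, landing_root: str, checkpoint_root: str):
    """Start one Auto Loader stream over the whole landing directory.

    Hypothetical sketch: assumes CSV input and append-only tables.
    Requires a Databricks cluster (cloudFiles is Databricks-specific).
    """
    from pyspark.sql.functions import col

    def route_batch(batch_df, batch_id):
        # Each micro-batch may contain rows from several source files;
        # split by source file and append to the matching table.
        for (fp,) in batch_df.select("source_file").distinct().collect():
            (batch_df
             .where(col("source_file") == fp)
             .drop("source_file")
             .write.mode("append")
             .saveAsTable(table_name_from_path(fp)))

    return (spark.readStream
            .format("cloudFiles")
            .option("cloudFiles.format", "csv")  # assumed file format
            .option("cloudFiles.schemaLocation", f"{checkpoint_root}/schemas")
            .load(landing_root)  # root directory; subfolders are discovered
            # _metadata.file_path tags each row with its source file
            .select("*", col("_metadata.file_path").alias("source_file"))
            .writeStream
            .option("checkpointLocation", f"{checkpoint_root}/checkpoint")
            .foreachBatch(route_batch)
            .trigger(availableNow=True)  # process pending files, then stop
            .start())
```

Empty subdirectories are not a problem for this pattern: Auto Loader simply finds no files there. The checkpoint ensures each file is processed exactly once across runs.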