Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.
Currently I have files landing in a storage account. They are all located in subfolders of a common directory. Some subdirectories may contain files, others may not. Each file name is unique and corresponds to a unique table; no two files update the same table. Is it possible to use Auto Loader and cloudFiles in such a way that it can be given the path to the main directory as input, traverse through it, and process the data per file?
Read all the files using Auto Loader and add a column that records each row's source file:

from pyspark.sql.functions import input_file_name
df = df.withColumn("filePath", input_file_name())

Now that you have the file path, you can split the DataFrame on that column and ingest the data into the corresponding tables. (Note that on newer Databricks runtimes, `input_file_name()` is deprecated and the `_metadata.file_path` column is the recommended way to get the source file path.)
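A minimal sketch of the splitting step, assuming (as the question states) each unique file name maps 1:1 to a table. The landing paths and the helper name are illustrative, not from the original post:

```python
import os

def table_name_for(file_path: str) -> str:
    """Derive the target table name from a landed file's path.

    Each unique file name corresponds to a unique table, so the
    file's base name without its extension serves as the routing key.
    """
    base = os.path.basename(file_path)   # e.g. "orders.csv"
    return os.path.splitext(base)[0]     # e.g. "orders"

# Example: routing a set of discovered file paths to target tables.
paths = [
    "/mnt/landing/region_a/orders.csv",
    "/mnt/landing/region_b/customers.csv",
]
routing = {p: table_name_for(p) for p in paths}
```

In a streaming job this mapping would typically run inside a foreachBatch function: collect the distinct filePath values present in the micro-batch, then filter the batch DataFrame on each value and append that slice to its table.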