
Is it possible to use Autoloader with a daily update file structure?

StephanieAlba
Databricks Employee

We get new files from a third party each day. The files could be the same or different, but each day all the CSV files arrive in the same dated folder. Is it possible to use Auto Loader on this structure? We want each CSV file to become a table that gets updated each day, like an account table and an accounting table.

1 ACCEPTED SOLUTION

Hubert-Dudek
Esteemed Contributor III

@Stephanie Rivera, you can use pathGlobFilter, but you will need a separate Auto Loader stream for each type of file:

df_alert = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "csv")       # the incoming files are CSVs
    .option("pathGlobFilter", "alert.csv")    # pick up only the alert file in each dated folder
    .load("<base_path>")
)
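
Since the goal is one table per CSV type, here is a minimal sketch of running one Auto Loader stream per file type and writing each to its own Delta table. The table names, the placeholder paths, and the availableNow trigger are illustrative assumptions, not part of the original answer:

file_types = ["account", "accounting", "alert"]  # one stream per CSV type (names are assumptions)

for name in file_types:
    (spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "csv")
        .option("cloudFiles.schemaLocation", f"<schema_path>/{name}")  # CSV needs a schema location for inference
        .option("pathGlobFilter", f"{name}.csv")                       # e.g. account.csv in each dated folder
        .load("<base_path>")
        .writeStream
        .option("checkpointLocation", f"<checkpoint_path>/{name}")
        .trigger(availableNow=True)                                    # process new files, then stop
        .toTable(name))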

Personally, I would first set up a copy activity (in Azure Data Factory, for example) to group the files by type in the data lake: alerts.csv is copied to an alerts folder and renamed to the date, giving alerts/2022-04-08.csv (or converted to Parquet instead). I would then register that folder in the Databricks metastore so it is queryable, e.g. SELECT * FROM alerts, or convert it with Delta Live Tables. The copy activity in Azure Data Factory can be configured to detect and copy only new files.
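
If you go that route, registering the grouped folder is a one-off statement. A minimal sketch, assuming a hypothetical alerts folder path and a header row (adjust to your actual layout):

spark.sql("""
    CREATE TABLE IF NOT EXISTS alerts
    USING CSV
    OPTIONS (header = 'true')
    LOCATION '<data_lake_path>/alerts/'
""")

# Every new daily file landing in the folder is then visible to this query.
spark.sql("SELECT * FROM alerts").show()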


