@Stephanie Rivera , You can use pathGlobFilter, but you will need a separate Auto Loader stream for each type of file.
# Auto Loader stream that only picks up files matching the glob pattern
df_alert = spark.readStream.format("cloudFiles") \
    .option("cloudFiles.format", "binaryFile") \
    .option("pathGlobFilter", "alert.csv") \
    .load(<base_path>)
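For each additional file type you would start another stream with its own glob pattern. For example (the events.csv name here is just a hypothetical placeholder for your second file type):

# Separate Auto Loader stream for a second file type, filtered by its own glob
df_events = spark.readStream.format("cloudFiles") \
    .option("cloudFiles.format", "binaryFile") \
    .option("pathGlobFilter", "events.csv") \
    .load(<base_path>)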
Personally, I would prefer to set up a copy activity first (in Azure Data Factory, for example) to group all files of the same type into one folder on the data lake. So, for example, alerts.csv is copied to an alerts folder and renamed to the date, e.g. alerts/2022-04-08.csv (or maybe Parquet instead). Then I would register that folder in the Databricks metastore so it is queryable like SELECT * FROM alerts, or use Delta Live Tables to convert it (see the sketch below). In the copy activity in Azure Data Factory, you can then configure it to detect and copy only new files.
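A minimal sketch of registering the landing folder as an external table, assuming CSV files with a header row and a hypothetical mount path /mnt/datalake/alerts (adjust both to your setup):

# Register the alerts folder as an external table so it can be queried with SQL
spark.sql("""
    CREATE TABLE IF NOT EXISTS alerts
    USING CSV
    OPTIONS (header = 'true', inferSchema = 'true')
    LOCATION '/mnt/datalake/alerts'
""")

# Every new file the copy activity drops into the folder becomes visible to queries
spark.sql("SELECT * FROM alerts").show()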