I have implemented Databricks Autoloader, but every time I execute the code it still reads all of the old, already-processed files in addition to the new ones. As I understand Autoloader, it should read and process only new files. Below is the code. Please help me understand what might have gone wrong.
df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "csv")
    .option("cloudFiles.schemaLocation", <ADLS_PATH>)
    .option("cloudEvolutionMode", "rescue")
    .option("header", True)
    .load("abfss://container2@storageaccount1.dfs.core.windows.net/autoloader/input1/*/")
    .writeStream
    .option("checkpoint_location", "abfss://container2@storageaccount1.dfs.core.windows.net/autoloader/checkpoint1")
    .option("mergeSchema", True)
    .trigger(availableNow=True)
    .toTable("uniform_catalog1.autoloader2.table1")
)
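For comparison, this is the pattern I was trying to follow, written out as a minimal sketch based on my reading of the Autoloader documentation. The paths, table name, and variable names here are placeholders rather than my real configuration, and the option names reflect my understanding of the docs (cloudFiles.schemaEvolutionMode on the reader, checkpointLocation on the writer):

# Minimal Autoloader sketch with placeholder paths and table name.
df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "csv")
    # Directory where Autoloader persists the inferred schema between runs.
    .option("cloudFiles.schemaLocation", "<schema_path>")
    # Put unexpected/new columns into _rescued_data instead of failing the stream.
    .option("cloudFiles.schemaEvolutionMode", "rescue")
    .option("header", True)
    .load("<input_path>")
)

query = (
    df.writeStream
    # The checkpoint is what lets the stream remember which files it has
    # already processed, so subsequent runs pick up only new files.
    .option("checkpointLocation", "<checkpoint_path>")
    .option("mergeSchema", True)
    # Process everything currently available, then stop.
    .trigger(availableNow=True)
    .toTable("<catalog>.<schema>.<table>")
)

Is there anything in my actual code above that deviates from this pattern in a way that would cause every run to start from scratch?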