Hi,
I'm using Auto Loader with Azure Databricks:
df = (spark.readStream.format("cloudFiles")
.options(**cloudfile)
.load("abfss://dev@std******.dfs.core.windows.net/**/*****"))
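For context, cloudfile just holds the Auto Loader options; it is roughly like the following (the source format and schema path here are illustrative assumptions, not my real values):

# Rough shape of the cloudfile options dict -- format and schema path
# below are illustrative assumptions, not my actual config.
cloudfile = {
    "cloudFiles.format": "json",
    "cloudFiles.schemaLocation": "abfss://dev@<storage>.dfs.core.windows.net/<schema-path>",
}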
In my target checkpointLocation folder, some files and subdirectories are created as a result.
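For reference, the write side where that checkpointLocation is configured looks roughly like this (a sketch; the sink format and the paths are illustrative assumptions):

# Sketch of the write side -- the checkpointLocation option points at the
# folder mentioned above; sink format and paths are illustrative assumptions.
query = (df.writeStream
    .format("delta")
    .option("checkpointLocation", "abfss://dev@<storage>.dfs.core.windows.net/<checkpoint-path>")
    .start("abfss://dev@<storage>.dfs.core.windows.net/<target-path>"))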
It detects and processes new files, which is OK.
Also, when I restart my cluster, it again processes only the new files, which is OK.
But if I want to restart Auto Loader so that it re-processes all the files from the source folder, I could not find anything on how to do so.
Can someone please give me a hint?