07-23-2024 07:47 PM
Hi,
I am currently setting up Auto Loader in Databricks and will be using ADF (Azure Data Factory) as the orchestrator.
I am not sure how this will handle my data, so please correct me if I have misunderstood.
First, I will run my ADF pipeline, which includes an activity that calls my Auto Loader notebook. Will it work like below, or will it just process all the files in the folder every time I run the ADF pipeline?
***I'm using option("cloudFiles.includeExistingFiles", False) on my readStream
07-23-2024 10:50 PM
Auto Loader processes files incrementally. Say you have files in an existing directory called /input_files.
The first time you run Auto Loader, it reads all the files in that directory (unless you set the option cloudFiles.includeExistingFiles to false, like you did) and saves information about which files have been read to the checkpoint location.
The next run will only load new files, because Auto Loader knows what was loaded previously thanks to the checkpoint.
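For reference, a minimal sketch of that pattern in a notebook cell (the paths, file format, and target table are placeholders I made up, not from your setup). Using trigger(availableNow=True) makes the stream pick up whatever is new and then stop, which fits an ADF-triggered run:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # already defined for you in a Databricks notebook

df = (
    spark.readStream
    .format("cloudFiles")                                # Auto Loader source
    .option("cloudFiles.format", "json")                 # assumed input format
    .option("cloudFiles.includeExistingFiles", "false")  # skip files already present on the first run
    .load("/input_files")
)

(
    df.writeStream
    .option("checkpointLocation", "/checkpoints/input_files")  # where Auto Loader records processed files
    .trigger(availableNow=True)  # process available files, then stop
    .toTable("bronze.input_files")
)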
07-24-2024 06:17 AM
I see. So even though my stream stops, it can still identify the files that were processed using the info in the checkpoint location?
07-24-2024 06:18 AM
Exactly, you got it right 😉
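If you ever want to see that for yourself, Databricks exposes a table-valued function, cloud_files_state, that lists the files an Auto Loader stream has recorded in a given checkpoint (the path below is a placeholder matching the sketch above):

# Inspect which files Auto Loader has tracked in the checkpoint
spark.sql(
    "SELECT * FROM cloud_files_state('/checkpoints/input_files')"
).show(truncate=False)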