Hi @Aleksei Zhukov, I don't think there is a built-in function for capturing the timestamp of source files. However, if you want to perform incremental ingestion in Databricks, there are a few different approaches:
- The simplest way would be to use Databricks Auto Loader.
- Another approach is to maintain a control table that tracks the last load timestamp and compare it against the modification timestamps of your files to identify and load only the new ones. This part typically has to be done in Python, as Spark has no direct functions for it (see the sketch after this list).
- Alternatively, you can move processed files to an archive path so that your input path only ever contains the new files you still need to process.
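Here is a minimal sketch of the control-table approach, assuming a Databricks notebook (so `spark` and `dbutils` are available); the paths, the `etl_control` Delta table, and the `orders_bronze` target table are hypothetical placeholders:

```python
from datetime import datetime

input_path = "/mnt/landing/orders"   # hypothetical landing path
control_table = "etl_control"        # hypothetical Delta table: (source STRING, last_load_ts TIMESTAMP)

# 1. Read the last load timestamp from the control table (fall back to the epoch on the first run)
last_load_ts = (
    spark.table(control_table)
         .where("source = 'orders'")
         .selectExpr("max(last_load_ts) AS ts")
         .first()["ts"]
) or datetime(1970, 1, 1)

# 2. List the input path and keep only files modified after the last load.
#    FileInfo.modificationTime is in epoch milliseconds (available on recent DBR versions).
new_files = [
    f.path
    for f in dbutils.fs.ls(input_path)
    if datetime.fromtimestamp(f.modificationTime / 1000) > last_load_ts
]

if new_files:
    # 3. Load only the new files and append them to the target table
    (spark.read.format("csv").option("header", "true").load(new_files)
          .write.mode("append").saveAsTable("orders_bronze"))

    # 4. Move the high-water mark forward so the next run skips these files
    spark.sql(f"UPDATE {control_table} SET last_load_ts = current_timestamp() WHERE source = 'orders'")
```

If you prefer the archive-path option instead, the same flow can finish by moving each processed file out of the input path with dbutils.fs.mv, so the next run only sees new files.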
This is exactly what I have explored in my recent Medium blog. Please see if it helps.
--
Databricks Auto Loader is an interesting feature that can be used to load data incrementally.
✳ It can process new data files incrementally as they arrive in cloud object storage
✳ It can be used to ingest JSON, CSV, Parquet, Avro, ORC, text, and even binary file formats
✳ Auto Loader can scale to millions of files per hour. It maintains its state information at a checkpoint location in a key-value store called RocksDB. Because the state is kept in the checkpoint, it can resume from where it left off after a failure and can guarantee exactly-once semantics.
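If it helps, here is a minimal Auto Loader sketch; the source path, checkpoint location, file format, and target table are placeholders, and the availableNow trigger assumes a reasonably recent DBR/Spark version:

```python
# Incrementally discover and load new files from a (hypothetical) landing path
df = (
    spark.readStream.format("cloudFiles")
         .option("cloudFiles.format", "json")                         # or csv, parquet, avro, orc, text, binaryFile
         .option("cloudFiles.schemaLocation", "/mnt/checkpoints/orders/_schema")
         .load("/mnt/landing/orders")
)

# The checkpoint is where Auto Loader keeps its RocksDB file-tracking state,
# which is what gives it restartability and exactly-once guarantees
(
    df.writeStream
      .option("checkpointLocation", "/mnt/checkpoints/orders")
      .trigger(availableNow=True)   # process everything available, then stop (batch-style incremental run)
      .toTable("orders_bronze")
)
```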
Please find my write-ups on Databricks Auto Loader on Medium below. Happy to hear any feedback 🙂
🔅 Databricks Autoloader Series- Accelerating Incremental Data Ingestion: https://lnkd.in/ew3vaPmp
🔅 Databricks Auto Loader Series— The basics: https://lnkd.in/e2zanWfc
Thanks,
Vignesh