load files filtered by last_modified in PySpark
05-19-2023 04:26 AM
Hi, community!
What do you think is the best way to load only files modified after some point in time from Azure ADLS (actually, the filesystem doesn't matter) into a DataFrame?
Is there any function like input_file_name() but for last_modified, to use it in a way like this?
```python
from pyspark.sql.functions import input_file_name

df = (spark.read.json("abfss://container@storageaccount.dfs.core.windows.net/*/*/*/*/*.json")
      .withColumn("filename", input_file_name())
      .where("filename == '******'"))
```
05-22-2023 01:04 AM
Hi @Aleksei Zhukov, I don't think there is a built-in function for capturing the timestamp of source files. However, if you want to perform incremental ingestion using Databricks, there are a few different approaches:
- One simple way would be to use Databricks Auto Loader.
- Another approach would be to maintain a control table that tracks the last load timestamp and compare it with the modification timestamps of your files to identify and load only the new ones. This would likely need to be done in Python, as Spark has no direct functions for it (see the sketch after this list).
- You could move processed files to an archive path so that your input path only ever contains the new files you need to process.
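A minimal sketch of the control-table idea, assuming Databricks (where dbutils.fs.ls exposes a modificationTime in epoch milliseconds on recent runtimes); the base path and last_load_ts value are placeholders you would read from your own control table:

```python
# Sketch: list files recursively and keep only those modified after the
# last recorded load. Assumes Databricks dbutils; last_load_ts is a
# hypothetical value read from a control table (epoch milliseconds).
last_load_ts = 1684000000000

def list_new_files(path, cutoff_ms):
    """Recursively collect JSON files modified after cutoff_ms."""
    files = []
    for entry in dbutils.fs.ls(path):
        if entry.isDir():
            files += list_new_files(entry.path, cutoff_ms)
        elif entry.name.endswith(".json") and entry.modificationTime > cutoff_ms:
            files.append(entry.path)
    return files

base = "abfss://container@storageaccount.dfs.core.windows.net/"
new_files = list_new_files(base, last_load_ts)
if new_files:
    df = spark.read.json(new_files)
    # ...process df, then write max(modificationTime) back to the control table
```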
This is exactly what I have explored in my recent Medium blog. Please see if it helps.
--
Databricks Auto Loader is an interesting feature that can be used to load data incrementally.
✳ It can process new data files as they arrive in cloud object stores.
✳ It can ingest JSON, CSV, Parquet, Avro, ORC, text, and even binary file formats.
✳ Auto Loader can scale to millions of files per hour. It maintains state information at a checkpoint location in a key-value store called RocksDB. Because the state is kept in the checkpoint, it can resume from where it left off even after a failure and can guarantee exactly-once semantics.
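For illustration, a minimal Auto Loader sketch in PySpark; the paths, schema/checkpoint locations, and target table name are placeholders:

```python
# Sketch: incrementally ingest new JSON files from ADLS with Auto Loader.
df = (spark.readStream
      .format("cloudFiles")
      .option("cloudFiles.format", "json")
      .option("cloudFiles.schemaLocation",
              "abfss://container@storageaccount.dfs.core.windows.net/_schemas/")
      .load("abfss://container@storageaccount.dfs.core.windows.net/*/*/*/*/"))

(df.writeStream
   .option("checkpointLocation",
           "abfss://container@storageaccount.dfs.core.windows.net/_checkpoints/")
   .trigger(availableNow=True)  # process all new files, then stop (batch-style)
   .toTable("my_target_table"))  # hypothetical target table
```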
Please find my write-up on Databricks Auto Loader on Medium below. Happy for any feedback 🙂
🔅 Databricks Autoloader Series- Accelerating Incremental Data Ingestion: https://lnkd.in/ew3vaPmp
🔅 Databricks Auto Loader Series— The basics: https://lnkd.in/e2zanWfc
Thanks,
Vignesh
05-22-2023 10:57 AM
The _metadata column will provide the file modification timestamp. I tried it on DBFS, but I'm not sure about ADLS.
https://docs.databricks.com/ingestion/file-metadata-column.html
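For example, a sketch of filtering on the hidden _metadata column described in the docs above (the path and cutoff timestamp are placeholders):

```python
from pyspark.sql.functions import col

# Sketch: surface the hidden _metadata column and keep only rows coming
# from files modified after a chosen cutoff (placeholder timestamp).
df = (spark.read
      .json("abfss://container@storageaccount.dfs.core.windows.net/*/*/*/*/*.json")
      .select("*", "_metadata.file_modification_time")
      .where(col("file_modification_time") > "2023-05-01 00:00:00"))
```

Note that Spark still lists every file under the path; the filter is applied as the data is read.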

