Hey @jaimeperry12345
I'll need a bit more information to point you in the right direction:
- Confirm the behavior: Double-check that your Delta table really is picking up 8-day-old files at random, and share any logs or error messages you have about it (see the sketch after this list for one way to inspect what the stream has ingested).
- Expected behavior: Explain how the table should ideally behave. Are you expecting it to pick up only the latest files?
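If the table is fed by Auto Loader, one quick way to confirm what it has actually ingested is the `cloud_files_state` SQL function (available on recent Databricks Runtime versions), which exposes the file-level state of the stream's checkpoint. A minimal PySpark sketch, assuming a hypothetical checkpoint path:

```python
# Hypothetical checkpoint location -- replace with your stream's actual path.
checkpoint = "abfss://container@account.dfs.core.windows.net/checkpoints/my_stream"

# cloud_files_state lists every file the Auto Loader stream has discovered,
# with timestamps, so you can see exactly which files (and how old) it read.
spark.sql(
    f"SELECT * FROM cloud_files_state('{checkpoint}')"
).show(truncate=False)
```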
Based on the details you've shared so far, please check the following:
- Check file timestamps: Verify that the file modification timestamps on Azure Blob Storage reflect when the files were actually uploaded. Inconsistent timestamps can mislead Auto Loader's file discovery (see the first sketch after this list).
- Review Auto Loader configuration: Ensure your stream points at the correct directory (and glob pattern), and check options such as cloudFiles.includeExistingFiles; by default Auto Loader also ingests files that already existed in the path when the stream first started, which can look like it is "randomly" reading old files (see the second sketch after this list).
- Spark configuration: Make sure your Spark session doesn't have settings that interfere with reading the latest files (e.g., caching), and that you aren't restarting the stream with a fresh checkpoint location, since a new checkpoint makes Auto Loader re-discover existing, older files.
- Cluster termination: If you're using a managed Databricks cluster, ensure it isn't automatically terminating and restarting, as a restart can sometimes cause Auto Loader to pick up older files.
- Logs and diagnostics: Analyze the Delta transaction log and the Spark driver logs for clues about what's causing the issue; there may be specific error messages or warnings related to Auto Loader. Running DESCRIBE HISTORY on the target table also shows which commits added the old files and when.
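For the timestamp check, `dbutils.fs.ls` on recent runtimes returns each file's `modificationTime` (milliseconds since the epoch) as the storage layer reports it. A minimal sketch, with a hypothetical landing path:

```python
from datetime import datetime, timezone

# Hypothetical landing directory -- replace with your actual input path.
landing = "abfss://container@account.dfs.core.windows.net/landing/"

# Print each file's modification time (UTC) next to its path, newest first,
# so you can compare what storage reports against when files really arrived.
files = sorted(dbutils.fs.ls(landing), key=lambda f: f.modificationTime, reverse=True)
for f in files:
    ts = datetime.fromtimestamp(f.modificationTime / 1000, tz=timezone.utc)
    print(ts.isoformat(), f.path)
```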
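And for the configuration review, here's a minimal Auto Loader read/write sketch (paths, file format, and table name are hypothetical placeholders) showing the options that control whether old files get picked up:

```python
# Ingest only files that arrive after the stream first starts; the default
# (true) also processes everything already sitting in the directory.
df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")  # adjust to your file format
    .option("cloudFiles.schemaLocation", "/checkpoints/demo/schema")
    .option("cloudFiles.includeExistingFiles", "false")
    # Optionally ignore anything older than a cutoff (DBR 10.3+):
    # .option("modifiedAfter", "2024-01-01 00:00:00.000000 UTC+0")
    .load("abfss://container@account.dfs.core.windows.net/landing/")
)

(
    df.writeStream
    .option("checkpointLocation", "/checkpoints/demo")  # keep stable across runs
    .trigger(availableNow=True)
    .toTable("main.default.my_table")
)
```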
Follow-ups are appreciated!
Leave a like if this helps! Kudos,
Palash