08-22-2025 03:01 AM
08-22-2025 03:21 AM
Hi @nikhilshetty4 ,
I think it might be a bug related to that feature. You're not the first person to report that it doesn't work as expected:
Autoloader move file to archive immediately after ... - Databricks Community - 120692
08-22-2025 04:19 AM
Hello @nikhilshetty4!
To confirm, do the files show a non-null commit_time in cloud_files_state? They’ll only move to the archive location after this is set and the retention period has elapsed.
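For anyone following along, that check can be run with a query like the sketch below. The `cloud_files_state` table-valued function and the column names come from the thread itself; the checkpoint path is a placeholder, not a real location:

```python
# Sketch: inspect Auto Loader's per-file state via the cloud_files_state
# table-valued function. The checkpoint path is a placeholder.
CHECKPOINT = "/checkpoints/demo"

state_query = f"""
SELECT path, commit_time, archive_time, archive_mode, move_location
FROM cloud_files_state('{CHECKPOINT}')
ORDER BY commit_time DESC
"""

# On Databricks: display(spark.sql(state_query))
# A non-null commit_time means the file was processed; archive_time stays
# null until cleanSource has actually moved or deleted the file.
```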
08-22-2025 04:33 AM - edited 08-22-2025 04:34 AM
Hi @Advika
Yes, the commit_time column contains valid timestamp values. However, the archive_time, archive_mode, and move_location columns are all showing null.
08-22-2025 05:08 AM
@nikhilshetty4, if the archive columns (archive_time, archive_mode, move_location) are null, it means the files haven't been picked up by cleanSource for move/delete yet. Move/delete happens only after commit_time is set, the retention period has passed, and the stream is actively processing. If the stream is stopped, cleanup won't occur; it resumes the next time the stream runs and processes data.
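The conditions above map to a small set of Auto Loader options. A minimal sketch, assuming the option names `cloudFiles.cleanSource`, `cloudFiles.cleanSource.moveDestination`, and `cloudFiles.cleanSource.retentionDuration` as documented by Databricks; the paths, format, and values here are illustrative placeholders, not taken from the thread:

```python
# Hypothetical helper that assembles the cleanSource options discussed above.
# Option names follow the Databricks Auto Loader docs; values are placeholders.
def clean_source_options(move_destination, retention="30 days"):
    return {
        "cloudFiles.format": "json",                                # source file format
        "cloudFiles.cleanSource": "MOVE",                           # MOVE, DELETE, or OFF
        "cloudFiles.cleanSource.moveDestination": move_destination, # required for MOVE
        "cloudFiles.cleanSource.retentionDuration": retention,      # delay after commit_time
    }

# On Databricks the dict would be applied to a streaming read, e.g.:
#   opts = clean_source_options("abfss://archive@acct.dfs.core.windows.net/landed")
#   df = spark.readStream.format("cloudFiles").options(**opts).load(source_path)
# Files move only once commit_time is set, the retention period elapses,
# and the stream is running and processing data.
```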
08-22-2025 05:24 AM - edited 08-22-2025 05:45 AM
@Advika, I’ve tried setting the retention duration to 1 or 2 minutes and kept the stream running well beyond that time. Even when new files were processed during the stream, the data still wasn’t moved to the archive location.
I've attached the screenshot of cloud_files_state output:
I also checked the Auto Loader documentation, and it doesn't mention any time restriction for MOVE.
08-22-2025 06:05 AM
Thanks for sharing the details, @nikhilshetty4.
I recommend raising a case with the Databricks Support team and including all the relevant details. This will help them investigate and resolve the issue more quickly.
08-22-2025 06:32 AM
Got it, thanks! I'll raise a case with the Databricks Support team.
Friday
Any update on this?