Data Engineering

Issue with Autoloader cleanSource=MOVE Not Working as Expected

nikhilshetty4
New Contributor II
Hi everyone,
 
I've been exploring the cleanSource option in Autoloader, which should move files from the source to an archive location after they're processed and loaded into a table. I used the following simple code to test this functionality. The code executes without any errors, but the files remain in the source location and are not moved.
 
source_path = "abfss://container@storage_acc.dfs.core.windows.net/source/files"
archive_path = "abfss://container@storage_acc.dfs.core.windows.net/archive/files"
schema_location = "abfss://container@storage_acc.dfs.core.windows.net/source/autoloader/schema"
checkpoint_location = "abfss://container@storage_acc.dfs.core.windows.net/source/autoloader/checkpoint"
 
 
df = (
    spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "parquet")
    .option("cloudFiles.schemaLocation", schema_location)
    # Archive processed files instead of leaving them in the source
    .option("cloudFiles.cleanSource", "MOVE")
    .option("cloudFiles.cleanSource.moveDestination", archive_path)
    .option("cloudFiles.includeExistingFiles", "true")
    .load(source_path)
)
 
 
(
    df.writeStream
    .format("delta")
    .option("checkpointLocation", checkpoint_location)
    .outputMode("append")
    # .table() starts the stream and writes to the Unity Catalog table
    .table("uc.schema.table_name")
)
 
When I run the query SELECT * FROM cloud_files_state(checkpoint_location), I notice that the archive_mode and move_location columns are NULL, even though I've explicitly set cleanSource to MOVE. I also tested the DELETE option with .option("cloudFiles.cleanSource.retentionDuration", "7 days"), but that didn't work either.
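For completeness, here's roughly the DELETE configuration I tested (a sketch; df_delete is just an illustrative name, and the paths and variables are the same placeholders as above):

# Sketch of the DELETE variant mentioned above (illustrative only)
df_delete = (
    spark.readStream
    .format("cloudFiles")
    .format("cloudFiles")
    .option("cloudFiles.format", "parquet")
    .option("cloudFiles.schemaLocation", schema_location)
    # Delete source files instead of moving them
    .option("cloudFiles.cleanSource", "DELETE")
    # Files should become eligible for deletion 7 days after commit
    .option("cloudFiles.cleanSource.retentionDuration", "7 days")
    .load(source_path)
)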
 
I came across a similar issue reported by another user using an S3 bucket as the source: Autoloader cleansource option does not take any effect 
 
I've tested this both in a notebook with a cluster running on Runtime 17.0 and using DLT with Runtime 16.4.
 
Could someone help me understand if Iโ€™m missing something or if there are any prerequisites or configurations needed to make this work?
 
Thanks,
Nikhil
7 REPLIES

szymon_dybczak
Esteemed Contributor III

Hi @nikhilshetty4,

I think it might be some kind of bug related to that feature. You're not the first person to report that it doesn't work as expected:

Autoloader move file to archive immediately after ... - Databricks Community - 120692

Advika
Databricks Employee

Hello @nikhilshetty4!

To confirm, do the files show a non-null commit_time in cloud_files_state? They'll only move to the archive location after this is set and the retention period has elapsed.
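For example, a quick way to check (a sketch; it reuses the checkpoint_location variable from your snippet):

# Inspect the file-state columns that cleanSource populates
spark.sql(f"""
    SELECT path, commit_time, archive_time, archive_mode, move_location
    FROM cloud_files_state('{checkpoint_location}')
""").show(truncate=False)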

nikhilshetty4
New Contributor II

Hi @Advika,
Yes, the commit_time column contains valid timestamp values. However, the archive_time, archive_mode, and move_location columns are all showing null.

Advika
Databricks Employee

@nikhilshetty4, if the archive columns (archive_time, archive_mode, move_location) are null, it means the files haven't been picked up by cleanSource for move/delete yet. Move/delete occurs after commit_time is set, the retention period has passed, and the stream is actively processing. If the stream is stopped, cleanup won't occur; it resumes the next time the stream runs and processes data.
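One thing worth checking: an availableNow/once-style trigger shuts the stream down as soon as the backlog is processed, which may not leave it running when the retention period elapses. As a sketch (reusing the variables and table name from your post), a continuously running trigger gives cleanup a chance to fire:

(
    df.writeStream
    .format("delta")
    .option("checkpointLocation", checkpoint_location)
    # Keep the stream alive between micro-batches; the 1-minute interval is arbitrary
    .trigger(processingTime="1 minute")
    .outputMode("append")
    .table("uc.schema.table_name")
)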

nikhilshetty4
New Contributor II

@Advika, I've tried setting the retention duration to 1 or 2 minutes and kept the stream running well beyond that time. Even when new files were processed during the stream, the data still wasn't moved to the archive location.
I've attached the screenshot of cloud_files_state output:

[Screenshot: cloud_files_state output showing populated commit_time but null archive_time, archive_mode, and move_location]
I did see that there is no time restriction for MOVE in the Auto Loader documentation:

[Screenshot: Auto Loader documentation showing no retention-time restriction for the MOVE option]
Advika
Databricks Employee

Thanks for sharing the details, @nikhilshetty4.
I recommend raising a case with the Databricks Support team and including all the relevant details. This will help them investigate and resolve the issue more quickly.

nikhilshetty4
New Contributor II

Got it, thanks! I'll raise a case with the Databricks Support team.
