Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Avoiding Duplicate Ingestion with Autoloader and Migrated S3 Data

New Contributor II

Hi Team,

We recently migrated event files from our previous S3 bucket to a new one. While using Autoloader for batch ingestion, we've found that the migrated data is being processed as new events, which creates duplicate records in our Databricks Delta table.

We understand that Autoloader uses RocksDB to track the files it has already ingested, so we'd appreciate your insights on how to make Autoloader skip the events that were previously ingested from the old S3 bucket.

Thank you in advance for your assistance.

See the code below (source_path and target_table are placeholders for our actual locations):


spark.conf.set("spark.databricks.cloudFiles.checkSourceChanged", False)

(spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", schema_path)
    .option("cloudFiles.schemaEvolutionMode", "addNewColumns")
    .load(source_path)  # placeholder: path to the new S3 bucket
    .writeStream.option("checkpointLocation", checkpoint_path)
    .option("mergeSchema", "true")
    .toTable(target_table))  # placeholder: target Delta table name




Esteemed Contributor


Changing the source means that Autoloader discovers the files as new. Technically they are new: Autoloader tracks ingested files by their full path in RocksDB, so files copied to a new location have never been seen before.

To overcome the issue you can use the modifiedAfter option, which tells Autoloader to ingest only files whose modification timestamp is after the time you provide.
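
For example (a minimal sketch reusing the placeholder variables from your question; migrated_cutoff is a hypothetical value you'd set to a moment after the migration copy finished):

migrated_cutoff = "2024-01-01 00:00:00.000000 UTC+0"  # hypothetical cutoff timestamp

(spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", schema_path)
    # ingest only files modified after the cutoff, skipping the migrated copies
    .option("modifiedAfter", migrated_cutoff)
    .load(source_path)
    .writeStream.option("checkpointLocation", checkpoint_path)
    .toTable(target_table))

One thing to keep in mind: S3 assigns copied objects a new last-modified timestamp (the time of the copy), so the cutoff has to be later than the moment the migration copy completed.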
