Autoloader - understanding missing file after schema update.

Larrio
New Contributor III

Hello,

Concerning Autoloader (based on https://docs.databricks.com/ingestion/auto-loader/schema.html), so far what I understand is that when it detects a schema update, the stream fails and I have to rerun it to make it work, which is fine.
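
For context, my stream is roughly the following (an illustrative sketch only; the paths, option values and target table are placeholders, not my exact job):

# Auto Loader stream in file-notification mode (matching the notification error below)
df = (spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "parquet")
    .option("cloudFiles.useNotifications", "true")
    .option("cloudFiles.schemaLocation", "s3://some-bucket/_schemas/my_stream")
    # "addNewColumns" is the default: the stream stops when new columns appear and must be restarted
    .option("cloudFiles.schemaEvolutionMode", "addNewColumns")
    .load("s3://some-bucket/path/to/data/"))

(df.writeStream
    .option("checkpointLocation", "s3://some-bucket/_checkpoints/my_stream")
    .toTable("some_catalog.some_schema.some_table"))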

But once I rerun it, it looks for missing files, hence the following exception:

Caused by: com.databricks.sql.io.FileReadException: Error while reading file s3://some-bucket/path/to/data/1999/10/20/***.parquet. [CLOUD_FILE_SOURCE_FILE_NOT_FOUND] A file notification was received for file: s3://some-bucket/path/to/data/1999/10/20/***.parquet but it does not exist anymore. Please ensure that files are not deleted before they are processed. To continue your stream, you can set the Spark SQL configuration spark.sql.files.ignoreMissingFiles to true.

It works well once I set ignoreMissingFiles to True.
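
Concretely, that is the only change (shown session-scoped here; I believe setting the same key in the cluster's Spark config is equivalent):

# Tell Spark to skip files that were listed/notified but no longer exist
spark.conf.set("spark.sql.files.ignoreMissingFiles", "true")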

I understand it fails the first time it detects a change, but why does it look for deleted files the second time Autoloader runs?

What is the impact? Do I lose data?

Thanks !

6 REPLIES

Debayan
Esteemed Contributor III

Hi, I found an interesting read on the same error: https://www.waitingforcode.com/apache-spark-sql/ignoring-files-issues-apache-spark-sql/read , let us know if this helps.

Also, please tag @Debayan in your next response, which will notify me. Thank you!

Larrio
New Contributor III

Hello @Debayan Mukherjee

Thanks for your answer. I've already seen that article, and it's good to know how a missing file is handled.

But my question here is more about Autoloader: why do we have missing files in the first place?

Debayan
Esteemed Contributor III

Hi,

Could you please confirm your cluster configuration? Also, the Spark conf?

Larrio
New Contributor III

Hi @Debayan Mukherjee

I don't have a custom Spark conf (except the following line, to make it ignore missing files):

spark.sql.files.ignoreMissingFiles true

The cluster conf:

Policy: Unrestricted
Multi node
Access mode: Single user
Databricks runtime version: 11.3 LTS (Scala 2.12, Spark 3.3.0)
Worker type: r5d.xlarge
Workers: 2 (64 GB memory, 8 cores total)
Driver type: Same as worker (32 GB memory, 4 cores)

I'm also using Unity Catalog, if that helps.

Anonymous
Not applicable

Hi @Lucien Arrio

Hope all is well! Just wanted to check in to see if you were able to resolve your issue. If so, would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help.

We'd love to hear from you.

Thanks!

Larrio
New Contributor III

Hello, I still don't have an answer on why we have missing files. I understand how Spark handles them, but I don't know why we have missing files in the first place.
