Data Engineering

Autoloader - understanding missing files after schema update

Larrio
New Contributor III

Hello,

Concerning Autoloader (based on https://docs.databricks.com/ingestion/auto-loader/schema.html), so far my understanding is that when it detects a schema update, the stream fails and I have to rerun it to make it work, which is fine.
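For context, my stream looks roughly like this (the bucket, table, and checkpoint names below are placeholders, not my real setup):

# Rough sketch of my Auto Loader stream; all paths/names are placeholders.
# cloudFiles.schemaLocation is where Auto Loader persists the inferred schema;
# a schema change stops the stream until it is restarted.
df = (spark.readStream
      .format("cloudFiles")
      .option("cloudFiles.format", "parquet")
      .option("cloudFiles.useNotifications", "true")  # file notification mode, as in the error
      .option("cloudFiles.schemaLocation", "s3://some-bucket/_schemas/my_table")
      .load("s3://some-bucket/path/to/data/"))

(df.writeStream
   .option("checkpointLocation", "s3://some-bucket/_checkpoints/my_table")
   .toTable("main.default.my_table"))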

But once I rerun it, it looks for missing files, hence the following exception:

Caused by: com.databricks.sql.io.FileReadException: Error while reading file s3://some-bucket/path/to/data/1999/10/20/***.parquet. [CLOUD_FILE_SOURCE_FILE_NOT_FOUND] A file notification was received for file: s3://some-bucket/path/to/data/1999/10/20/***.parquet but it does not exist anymore. Please ensure that files are not deleted before they are processed. To continue your stream, you can set the Spark SQL configuration spark.sql.files.ignoreMissingFiles to true.

It works well once I set ignoreMissingFiles to true.
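Concretely, I just set the conf named in the error message at the session level before restarting the stream (nothing else changed):

# Skip notifications for files that no longer exist instead of failing the stream.
spark.conf.set("spark.sql.files.ignoreMissingFiles", "true")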

I understand it fails the first time it detects a change, but why does it look for deleted files the second time Autoloader runs?

What is the impact? Do I lose data?

Thanks !

6 REPLIES

Debayan
Databricks Employee

Hi, I found an interesting read on the same error: https://www.waitingforcode.com/apache-spark-sql/ignoring-files-issues-apache-spark-sql/read . Let us know if this helps.

Also, please tag @Debayan in your next response, which will notify me. Thank you!

Larrio
New Contributor III

Hello @Debayan Mukherjee,

Thanks for your answer. I've already seen that article, and it's good to know how a missing file is handled.

But my question here is more about Autoloader: why do we have missing files in the first place?

Debayan
Databricks Employee

Hi,

Could you please confirm your cluster configuration, and also the Spark conf?

Larrio
New Contributor III

Hi @Debayan Mukherjee,

I don't have a custom Spark conf, except the following line, to make it ignore missing files:

spark.sql.files.ignoreMissingFiles true

The cluster conf:

Policy: Unrestricted
Multi node
Access mode: Single user
Databricks Runtime version: 11.3 LTS (Scala 2.12, Spark 3.3.0)
Worker type: r5d.xlarge
Workers: 2 (64 GB memory, 8 cores total)
Driver type: Same as worker (32 GB memory, 4 cores)

I'm using Unity Catalog also if that helps.

Anonymous
Not applicable

Hi @Lucien Arrio,

Hope all is well! Just wanted to check in to see if you were able to resolve your issue. If so, would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help.

We'd love to hear from you.

Thanks!

Larrio
New Contributor III

Hello, I still don't have an answer on why we have missing files. I understand how Spark handles them, but I don't know why the files go missing in the first place.
