03-07-2023 02:06 AM
Hello,
Concerning Auto Loader (based on https://docs.databricks.com/ingestion/auto-loader/schema.html), my understanding so far is that when it detects a schema update, the stream fails and I have to rerun it to make it work, which is fine.
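For context, a minimal sketch of the kind of stream this describes (the paths, schema location, and table name below are hypothetical; spark is the session Databricks provides in notebooks):

# With the default schema evolution mode, the stream stops when new columns
# appear; restarting it picks up the updated schema from schemaLocation.
df = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "parquet")
    # File notification mode, which the error message below suggests is in use
    .option("cloudFiles.useNotifications", "true")
    .option("cloudFiles.schemaLocation", "s3://some-bucket/_schemas/my_stream")
    .load("s3://some-bucket/path/to/data/")
)

(
    df.writeStream
    .option("checkpointLocation", "s3://some-bucket/_checkpoints/my_stream")
    .toTable("my_catalog.my_schema.my_table")
)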
But once I rerun it, it looks for missing files, hence the following exception:
Caused by: com.databricks.sql.io.FileReadException: Error while reading file s3://some-bucket/path/to/data/1999/10/20/***.parquet. [CLOUD_FILE_SOURCE_FILE_NOT_FOUND] A file notification was received for file: s3://some-bucket/path/to/data/1999/10/20/***.parquet but it does not exist anymore. Please ensure that files are not deleted before they are processed. To continue your stream, you can set the Spark SQL configuration spark.sql.files.ignoreMissingFiles to true.
It works well once I set ignoreMissingFiles to true.
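For reference, one way to apply that setting from a notebook before starting the stream (a sketch; it can equally be set in the cluster's Spark config):

spark.conf.set("spark.sql.files.ignoreMissingFiles", "true")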
I understand it fails the first time it detects a change, but why does it look for deleted files the second time Auto Loader runs?
What is the impact? Do I lose data?
Thanks!
03-08-2023 10:28 PM
Hi, I found an interesting read on the same error received: https://www.waitingforcode.com/apache-spark-sql/ignoring-files-issues-apache-spark-sql/read , let us know if this helps.
Also, please tag @Debayan in your next response, which will notify me. Thank you!
03-09-2023 01:02 AM
Hello @Debayan Mukherjee
Thanks for your answer. I've already seen that article, and it's good to know how a missing file is handled.
But my question here is more about Auto Loader: why do we have missing files in the first place?
03-12-2023 10:43 PM
Hi,
Could you please confirm your cluster configuration? Also, the Spark conf?
03-17-2023 08:54 AM
Hi @Debayan Mukherjee
I don't have a custom Spark conf (except the following line, to make it ignore the missing files):
spark.sql.files.ignoreMissingFiles true
The cluster conf:
Policy: Unrestricted
Multi node
Access mode: Single user
Databricks runtime version: 11.3 LTS (Scala 2.12, Spark 3.3.0)
Worker type: r5d.xlarge
Workers: 2 (64 GB memory, 8 cores)
Driver type: Same as worker (32 GB memory, 4 cores)
I'm also using Unity Catalog, if that helps.
03-31-2023 05:47 PM
Hi @Lucien Arrio
Hope all is well! Just wanted to check in: were you able to resolve your issue, and if so, would you be happy to share the solution or mark an answer as best? Otherwise, please let us know if you need more help.
We'd love to hear from you.
Thanks!
04-06-2023 01:42 AM
Hello, I still don't have an answer on why we have missing files. I understand how Spark handles them, but I don't know why we get missing files in the first place.