Hi,
I have a workflow that contains 5 notebooks. One of the notebooks is failing with the error below. I have tried refreshing the table, but I am still facing the same issue. When I run the notebook manually, it works fine. Can someone please help me find a permanent solution for this?
Job aborted due to stage failure: Task 736 in stage 92.0 failed 4 times, most recent failure: Lost task 736.3 in stage 92.0 (TID 3715) (executor 18): com.databricks.sql.io.FileReadException: Error while reading file <path>. [DEFAULT_FILE_NOT_FOUND] It is possible the underlying files have been updated. You can explicitly invalidate the cache in Spark by running 'REFRESH TABLE tableName' command in SQL or by recreating the Dataset/DataFrame involved. If disk cache is stale or the underlying files have been removed, you can invalidate disk cache manually by restarting the cluster.
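For reference, this is roughly the refresh I tried before the failing read (a minimal sketch; "my_table" is a placeholder for the actual table name):

spark.sql("REFRESH TABLE my_table")       # invalidate Spark's cached metadata/file listing for the table
spark.catalog.refreshTable("my_table")    # equivalent call through the catalog API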
@Hubert_Dudek1 @werners1 @Prabakar @Debayan @daniel_sahal