Yes, I did. This time the behaviour is the same both with Databricks Connect and in a Databricks notebook. A small note: I have set the setting to false, because I want the code to fail if any file cannot be loaded.
The following code prints false for the check and fails with an error, as expected.
print(spark.conf.get("spark.sql.files.ignoreCorruptFiles"))
paths = ["path_to_corrupted_file"]
df = spark.read.parquet(*paths)  # .parquet assumed here; use the reader for your actual file format
But the following code also prints false for the check, yet df is created successfully with only the one good file loaded. The expected behaviour is that it fails with an error as well, but there still seems to be some fault tolerance.
print(spark.conf.get("spark.sql.files.ignoreCorruptFiles"))
paths = ["path_to_corrupted_file", "path_to_normal_file"]
df = spark.read.parquet(*paths)  # .parquet assumed here; use the reader for your actual file format
It is possible that I do not understand the behaviour of the setting correctly, as I expect this case to end up with an error too.
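
For reference, here is a minimal sketch of how I try to enforce the strict behaviour, assuming Parquet files and the same placeholder paths as above; the per-read ignoreCorruptFiles option and the count() action are only there to rule out the possibility that lazy evaluation is hiding the error:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Disable fault tolerance at the session level
spark.conf.set("spark.sql.files.ignoreCorruptFiles", "false")

# Placeholder paths, as in the examples above
paths = ["path_to_corrupted_file", "path_to_normal_file"]

# Also pass the option on the reader itself (file-source option in Spark 3.x)
df = (
    spark.read
         .option("ignoreCorruptFiles", "false")
         .parquet(*paths)  # Parquet assumed; adjust to the real format
)

# Trigger an action so the files are actually scanned;
# I expect this to raise because of the corrupt file
print(df.count())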