I use badRecordsPath while reading a CSV as follows:

df = (
    spark.read.format("csv")
    .schema(schema)
    .option("badRecordsPath", bad_records_path)
)

Since bad records are not written immediately, I want to know how I can trigger the write...
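For context, this is roughly the full chain I have in mind; the schema, the paths, and the count() at the end are placeholders rather than my real code, and count() is just the kind of action I assume would force the read:

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, IntegerType, StringType

spark = SparkSession.builder.getOrCreate()

# Illustrative schema and paths (placeholders only, not the real ones).
schema = StructType([
    StructField("id", IntegerType(), True),
    StructField("name", StringType(), True),
])
bad_records_path = "/tmp/bad_records"
csv_path = "/tmp/input.csv"

df = (
    spark.read.format("csv")
    .schema(schema)
    .option("badRecordsPath", bad_records_path)
    .load(csv_path)
)

# The read above is lazy: the file is only scanned, and malformed rows only
# land under bad_records_path, once an action forces evaluation, e.g.:
df.count()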
I found out why the code didn't trigger the bad records write: I emptied the folder for bad records. After fixing that, it works. Thanks for the help, Isi.
The following write

data_frame.write.format("delta").option("optimizeWrite", "true").mode(
    "overwrite"
).saveAsTable(table_name)

doesn't trigger a bad records write. How is that possible?
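For what it's worth, this is roughly how I check whether anything landed under the bad records path after the write (a sketch assuming a Databricks notebook where dbutils and bad_records_path are already defined):

# List whatever ended up under badRecordsPath after the job ran. On
# Databricks, bad records are written to timestamped subfolders such as
# <bad_records_path>/<timestamp>/bad_records/.
for entry in dbutils.fs.ls(bad_records_path):
    print(entry.path)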
It helps :). Thank you. I have two questions to clarify and possibly optimize.

1) Since I write the DataFrame to a table later, I'm wondering if there is again a full evaluation of the DataFrame. Consequently, there are two full evaluations, one trigg...