Transaction Log Failed Integrity Checks
03-11-2025 11:40 AM
I have started receiving an error saying the transaction log has failed integrity checks whenever I attempt to run OPTIMIZE and compaction on a table. The same error occurs when I attempt to ALTER the table.
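For context, this is roughly what I am running (the table name is a placeholder, not our real table):

```python
# Placeholder table name for illustration only.
spark.sql("OPTIMIZE my_schema.my_table")  # fails: transaction log failed integrity checks
spark.sql("ALTER TABLE my_schema.my_table ADD COLUMNS (note STRING)")  # fails the same way
spark.sql("SELECT * FROM my_schema.my_table LIMIT 10").show()  # reads still work fine
```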
This blocks my pipeline from running. What is strange is that I can query the table without issue and all the data is intact, but I cannot update the table. Other community posts have mentioned this error in the past, but the suggested resolution was simply to disable the integrity check with the following Spark setting:
`spark.conf.set("spark.databricks.delta.state.corruptionIsFatal", False)`
I am concerned that setting `corruptionIsFatal` to `False` only suppresses the error and does not address the underlying problem. The only method I have found that actually resolves the issue is to copy the table to a new table and delete the original. This preserves all of our data, but we lose the transaction history (basically resetting the transaction log to zero). I would prefer not to do this, though, so that we can still time travel. Does anyone have any advice on what might be causing this issue or how it can be resolved?
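For reference, the copy-and-replace workaround I have been using looks roughly like this (table names are placeholders):

```python
# Placeholder names for illustration only.
src = "my_schema.my_table"
tmp = "my_schema.my_table_rebuilt"

# Rewrite the data into a fresh Delta table. This starts a brand-new
# transaction log, so all prior versions (and time travel) are lost.
spark.sql(f"CREATE TABLE {tmp} AS SELECT * FROM {src}")

# Swap the rebuilt table into place.
spark.sql(f"DROP TABLE {src}")
spark.sql(f"ALTER TABLE {tmp} RENAME TO {src}")
```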

