DQ is an interesting space with a lot of options. Soda and Great Expectations integrate fairly well with a Databricks setup.
I personally prefer DataFrame-level abstractions for validation. We used deequ, which is very simple to use: you pass your Spark DataFrame to the checks and the validations run inside your Spark session (if that's where you need them); otherwise you can decouple the DQ logic into separate classes in your package. I spent some time working with it and wrote a blog post about it - https://datatribe.substack.com/p/deequ-an-open-source-data-quality
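To make that concrete, here's a minimal sketch using PyDeequ (the Python wrapper around deequ). The data, column names, and check names are purely illustrative, and PyDeequ needs the SPARK_VERSION env var set before import:

```python
import os
os.environ["SPARK_VERSION"] = "3.3"  # PyDeequ needs to know the Spark version

from pyspark.sql import SparkSession
import pydeequ
from pydeequ.checks import Check, CheckLevel
from pydeequ.verification import VerificationSuite, VerificationResult

spark = (SparkSession.builder
         .config("spark.jars.packages", pydeequ.deequ_maven_coord)
         .config("spark.jars.excludes", pydeequ.f2j_maven_coord)
         .getOrCreate())

# Illustrative data - in practice this is whatever DataFrame you want to validate
df = spark.createDataFrame(
    [(1, "alice", 30), (2, "bob", None), (3, "carol", 25)],
    ["id", "name", "age"])

# Declare the checks; they run inside the same Spark session
result = (VerificationSuite(spark)
          .onData(df)
          .addCheck(Check(spark, CheckLevel.Error, "basic checks")
                    .isComplete("id")       # no nulls in id
                    .isUnique("id")         # id behaves like a primary key
                    .isNonNegative("age"))  # ages can't be negative
          .run())

# Each check's outcome lands in a regular DataFrame
VerificationResult.checkResultsAsDataFrame(spark, result).show(truncate=False)
```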
It's a DQ tool for data engineers, I would say. And, interestingly, you can write deequ's result DataFrames out as Delta tables to track quality patterns over time. It's maintained by awslabs: https://github.com/awslabs/deequ
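Continuing the sketch above, persisting the check results is just a normal DataFrame write, assuming a Delta-enabled environment like Databricks (the table name and run_ts column are hypothetical):

```python
from pyspark.sql.functions import current_timestamp

# check_df comes from the verification run in the previous sketch;
# stamping each run lets you query quality trends over time
check_df = VerificationResult.checkResultsAsDataFrame(spark, result)
(check_df
 .withColumn("run_ts", current_timestamp())
 .write.format("delta")
 .mode("append")
 .saveAsTable("dq.deequ_check_results"))  # hypothetical table name
```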
In addition, I'd like to try spark-expectations, open-sourced by Nike - https://github.com/Nike-Inc/spark-expectations
Chanukya