Hi @IM_01,
You can't change the UI to break out those numbers, but you can get per-expectation counts from the DLT (Lakeflow) event log. Each expectation entry records passed_records and failed_records; for EXPECT rules failed_records = warned rows, and for EXPECT ... DROP ROW rules failed_records = dropped rows. Expectations configured with FAIL UPDATE don't emit aggregate metrics.
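For reference, here is a minimal sketch of how those three enforcement modes are declared in DLT SQL (the table, constraint, and column names below are hypothetical, not from your pipeline):

```sql
CREATE OR REFRESH STREAMING TABLE my_dlt_table (
  -- EXPECT: violating rows are kept and counted in failed_records (warned)
  CONSTRAINT valid_id     EXPECT (id IS NOT NULL),
  -- EXPECT ... ON VIOLATION DROP ROW: violating rows are removed and counted in failed_records (dropped)
  CONSTRAINT valid_amount EXPECT (amount > 0) ON VIOLATION DROP ROW,
  -- EXPECT ... ON VIOLATION FAIL UPDATE: a violation aborts the update; no aggregate metrics are emitted
  CONSTRAINT valid_ts     EXPECT (event_ts IS NOT NULL) ON VIOLATION FAIL UPDATE
)
AS SELECT * FROM STREAM(source_view);
```

Only the first two modes will show up with usable passed/failed counts in the query below.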
Here is a sample query you can run; just replace my_dlt_table with the fully qualified name of your DLT table (catalog.schema.table).
WITH exploded AS (
  SELECT
    timestamp,
    explode(
      from_json(
        details:flow_progress:data_quality:expectations,
        'array<struct<name:string,dataset:string,passed_records:long,failed_records:long>>'
      )
    ) AS e
  FROM event_log(TABLE(my_dlt_table))
  WHERE details:flow_progress:data_quality IS NOT NULL
)
SELECT
  timestamp,
  e.name AS expectation_name,
  e.dataset,
  e.passed_records,
  e.failed_records
FROM exploded
ORDER BY timestamp DESC, expectation_name;
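If you'd rather see running totals per expectation instead of one row per update, you can aggregate on top of the same explode (same hypothetical table name as above):

```sql
WITH exploded AS (
  SELECT
    explode(
      from_json(
        details:flow_progress:data_quality:expectations,
        'array<struct<name:string,dataset:string,passed_records:long,failed_records:long>>'
      )
    ) AS e
  FROM event_log(TABLE(my_dlt_table))
  WHERE details:flow_progress:data_quality IS NOT NULL
)
SELECT
  e.dataset,
  e.name AS expectation_name,
  SUM(e.passed_records) AS total_passed,
  SUM(e.failed_records) AS total_failed
FROM exploded
GROUP BY e.dataset, e.name
ORDER BY e.dataset, expectation_name;
```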
I tested it against a sample table and it returned the per-expectation split, which I believe is what you're looking for.

You can also take a look at the documentation here on exploring data quality / expectation metrics in the event log.
Hope this helps.
If this answer resolves your question, could you mark it as "Accept as Solution"? That helps other users quickly find the correct fix.
Regards,
Ashwin | Delivery Solution Architect @ Databricks
Helping you build and scale the Data Intelligence Platform.
***Opinions are my own***