If you've ever built a dashboard where you needed to track the same data across two different date dimensions, you know the frustration. You get the first chart working. You add the second. Then you realise cross-filtering just stopped working.
I recently came across a question on the Databricks community forum that captures this perfectly. A user wanted to build a cumulative support ticket chart with two lines. One for tickets opened over time and another for tickets resolved. Simple enough in concept. But the moment they split the data into two datasets to get the two date columns on the X-axis, dashboard cross-filtering broke. They asked whether it was possible to build a star schema inside a Databricks dashboard or dynamically switch the X-axis.
It's a good question, and the answer involves a technique I reach for often. The trick is not to fight the dashboard. Instead, reshape the data so the dashboard never has to juggle multiple datasets. A plain SQL view handles that reshaping. And if you want to go further, Metric Views let you define reusable, governed metrics that work across dashboards, Genie, and SQL.
I would like to walk through both approaches using a support ticket scenario.
The setup
We have a support_tickets table. Each row is a ticket with a creation date, a resolution date (NULL if still open), a priority, a category, and an assignee.
CREATE OR REPLACE TABLE support_tickets (
  ticket_id STRING,
  created_date DATE,
  resolved_date DATE,
  priority STRING,
  category STRING,
  assignee STRING,
  status STRING
);
Fifty tickets, spread across July to November 2025. Forty have been resolved. Ten are still open.
The first 10 rows of our support_tickets table. Note the NULL resolved_date for open tickets.
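To make the setup reproducible, here are a few sample rows in the same shape. The actual fifty rows aren't shown in this post, so the dates, assignees, and status values below are illustrative placeholders, not the real dataset:

```sql
-- Illustrative sample rows only; the real dataset has 50 tickets
-- spread across July-November 2025. Status values are assumed.
INSERT INTO support_tickets VALUES
  ('T-001', DATE'2025-07-03', DATE'2025-07-10', 'High',   'Billing', 'alice', 'Resolved'),
  ('T-002', DATE'2025-07-05', NULL,             'Medium', 'Login',   'bob',   'Open'),
  ('T-003', DATE'2025-07-08', DATE'2025-07-09', 'Low',    'Billing', 'alice', 'Resolved');
```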
The problem: two dates, one chart
The goal is to plot two cumulative lines on the same chart: tickets opened over time and tickets resolved over time. The gap between the two lines at any point tells you the open backlog.
The first instinct is to create two datasets:
Dataset 1: Cumulative tickets opened
SELECT
  created_date,
  COUNT(*) AS tickets_opened,
  SUM(COUNT(*)) OVER (ORDER BY created_date) AS cumulative_opened
FROM support_tickets
GROUP BY created_date
ORDER BY created_date;
Dataset 2: Cumulative tickets resolved
SELECT
  resolved_date,
  COUNT(*) AS tickets_resolved,
  SUM(COUNT(*)) OVER (ORDER BY resolved_date) AS cumulative_resolved
FROM support_tickets
WHERE resolved_date IS NOT NULL
GROUP BY resolved_date
ORDER BY resolved_date;
Two separate datasets: cumulative opened (left) and cumulative resolved (right).
Both queries work individually. But put them on the same dashboard and add a filter for priority or category, and cross-filtering breaks. The dashboard treats them as independent datasets with no relationship. Filter one and the other doesn't respond.
This is the exact problem the community user ran into.
Solution 1: Reshape with a SQL view
The core insight is simple: if both date dimensions lived in the same column, we wouldn't need two datasets. We can get there by unpivoting the data. Instead of one row per ticket, we create two rows: one for the "opened" event and one for the "resolved" event.
CREATE OR REPLACE VIEW ticket_events AS
SELECT
  ticket_id,
  created_date AS event_date,
  'Opened' AS event_type,
  priority,
  category,
  assignee,
  status
FROM support_tickets
UNION ALL
SELECT
  ticket_id,
  resolved_date AS event_date,
  'Resolved' AS event_type,
  priority,
  category,
  assignee,
  status
FROM support_tickets
WHERE resolved_date IS NOT NULL;
The view unpivots created_date and resolved_date into a single event_date column.
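A quick sanity check confirms the view produces the expected split. Given the dataset described above (50 tickets opened, 40 resolved), the counts per event type should be:

```sql
-- Expect 50 'Opened' rows and 40 'Resolved' rows: 90 in total
SELECT event_type, COUNT(*) AS event_count
FROM ticket_events
GROUP BY event_type;
```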
Now we have a single dataset where event_date is the X-axis and event_type tells us which line we're on. The cumulative chart becomes one query:
SELECT
  event_date,
  event_type,
  COUNT(*) AS daily_count,
  SUM(COUNT(*)) OVER (
    PARTITION BY event_type
    ORDER BY event_date
  ) AS cumulative_count
FROM ticket_events
GROUP BY event_date, event_type
ORDER BY event_date, event_type;
Both cumulative lines in a single dataset. 90 rows: 50 opened events + 40 resolved events.
In the dashboard, you set event_date as the X-axis, cumulative_count as the Y-axis, and color by event_type. Two lines, one dataset.
And because every row still carries priority, category, and assignee, cross-filtering works. Add a filter for priority = 'High' and both lines update together:
SELECT
  event_date,
  event_type,
  COUNT(*) AS daily_count,
  SUM(COUNT(*)) OVER (
    PARTITION BY event_type
    ORDER BY event_date
  ) AS cumulative_count
FROM ticket_events
WHERE priority = 'High'
GROUP BY event_date, event_type
ORDER BY event_date, event_type;
Filtered to High priority only. Both lines respond to the same filter because they share a dataset.
No broken cross-filters. No need for a star schema inside the dashboard. The view did the work.
Solution 2: Metric views for reusable, governed metrics
The view solves the immediate dashboard problem. But what happens when someone else needs the same "tickets opened" and "tickets resolved" metrics in a different dashboard? Or when a colleague asks Genie "how many high-priority tickets were opened last month?"
If the logic only lives in a SQL view and a dashboard query, it gets copied. And copied logic drifts. This is where Metric Views come in.
A Metric View is a Unity Catalog object that defines dimensions and measures separately from any specific query. You define the metric once, and it can be consumed in dashboards, Genie, alerts, and direct SQL. It's the Databricks equivalent of Power BI's semantic model with DAX measures, except it lives in the catalog, not inside a BI tool.
Here's the metric view for our ticket events:
CREATE OR REPLACE VIEW ticket_metrics
WITH METRICS
LANGUAGE YAML
AS $$
version: 1.1
comment: "Support ticket metrics"
source: ticket_events
dimensions:
  - name: Event Date
    expr: event_date
  - name: Event Type
    expr: event_type
  - name: Priority
    expr: priority
  - name: Category
    expr: category
  - name: Assignee
    expr: assignee
measures:
  - name: Ticket Count
    expr: COUNT(*)
$$
The metric view definition: five dimensions and one measure, all in YAML.
Querying it looks like regular SQL, with one difference: measures are wrapped in the MEASURE() function.
SELECT
  `Event Date`,
  `Event Type`,
  MEASURE(`Ticket Count`) AS tickets,
  SUM(MEASURE(`Ticket Count`)) OVER (
    PARTITION BY `Event Type`
    ORDER BY `Event Date`
  ) AS cumulative_tickets
FROM ticket_metrics
WHERE `Event Date` IS NOT NULL
GROUP BY `Event Date`, `Event Type`
ORDER BY `Event Date`, `Event Type`;
Same cumulative result, now powered by the metric view and the MEASURE() function.
The results are identical to the view query. But the metric definition now lives in the catalog. Anyone with access can query it. Genie can use it. Another dashboard team can build on it without rewriting the logic.
And because dimensions are independent of measures, consumers can slice the data any way they want at query time:
-- By category
SELECT
  `Category`,
  `Event Type`,
  MEASURE(`Ticket Count`) AS tickets
FROM ticket_metrics
WHERE `Event Date` IS NOT NULL
GROUP BY `Category`, `Event Type`
ORDER BY `Category`, `Event Type`;
Same metric, sliced by category. Billing has the most open tickets.
-- By assignee
SELECT
  `Assignee`,
  `Event Type`,
  MEASURE(`Ticket Count`) AS tickets
FROM ticket_metrics
WHERE `Event Date` IS NOT NULL
GROUP BY `Assignee`, `Event Type`
ORDER BY `Assignee`, `Event Type`;
Same metric, sliced by assignee. No logic duplication across any of these queries.
Same metric, different cuts. No logic duplication.
How this maps to Power BI
If you're coming from Power BI, here's the translation:
| Power BI concept | Databricks equivalent |
| --- | --- |
| Star schema with date dimension | SQL view that reshapes data (unpivot) |
| DAX measure in semantic model | Metric view measure in Unity Catalog |
| Measure reuse across reports | Metric view queryable from any dashboard, Genie, or SQL |
| Cross-filtering across visuals | Single dataset via view, all filter columns preserved |
The key difference: in Power BI, your measures live inside the .pbix file or a published dataset. In Databricks, they live in Unity Catalog alongside your tables and views. They're governed, discoverable, and not locked inside any single tool.
Wrapping up
The pattern here is broadly applicable. Any time you need to visualize the same fact across two different date dimensions (or any two dimension columns) in a single chart with working cross-filters, the approach is:
- Unpivot the dimensions into a single column using a UNION ALL view
- Use the event type discriminator to split your chart lines or series
- Optionally, promote the logic into a Metric View so the metrics are reusable and governed
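As a generic sketch of the pattern, with placeholder table and column names (these are not from the post, just stand-ins):

```sql
-- Generic template: unpivot two date columns of a fact table into one
-- event_date column, keeping every filter column on both copies so
-- dashboard cross-filtering still works. fact_table, date_a, date_b,
-- and filter_col are placeholders.
CREATE OR REPLACE VIEW fact_events AS
SELECT id, date_a AS event_date, 'A' AS event_type, filter_col
FROM fact_table
UNION ALL
SELECT id, date_b AS event_date, 'B' AS event_type, filter_col
FROM fact_table
WHERE date_b IS NOT NULL;
```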
This post was inspired by a question on the Databricks community forum about building cumulative charts with cross-filtering. If you've run into a similar problem, I hope this gives you a clear path forward.
If you want to go deeper on metric views, the official documentation covers joins, window measures, and materialization.
Questions or a different use case where this pattern helped? Drop a comment below.