2 weeks ago - last edited 2 weeks ago
Hi All,
Recently, while testing Fine-grained Access Control (FGAC) on dedicated compute, I came across something that seems a bit unusual, and I’d like to ask whether anyone else has seen similar behavior.
I created a view with only one record, and had another user (who does not have access to the underlying table) run a simple SELECT query on it.
From the Query History, I can confirm that the query was indeed executed through FGAC.
However, when I checked the billing record in system.billing.usage, I noticed that this query only ran for 2.39 seconds, yet it consumed about 0.0811 DBU.
If we extrapolate that, it would be roughly 122 DBU per hour — which is almost equivalent to running a 2X-Large SQL Warehouse continuously.
What’s puzzling is that the time window between usage_start_time and usage_end_time is 10 minutes, even though the FGAC query itself only took 2.39 seconds to execute.
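For anyone who wants to reproduce the check, a query along these lines against system.billing.usage should surface the record (note: filtering on usage_metadata.warehouse_id is an assumption about how the record is tagged, and the placeholder ID needs replacing):

```sql
-- Pull recent usage records for one SQL warehouse from the billing system table.
-- The usage_metadata.warehouse_id filter is an assumption about how the FGAC
-- record is tagged; adjust it for your workspace.
SELECT
  usage_start_time,
  usage_end_time,
  sku_name,
  usage_quantity,                                              -- DBUs in this record
  timestampdiff(SECOND, usage_start_time, usage_end_time) AS window_seconds
FROM system.billing.usage
WHERE usage_metadata.warehouse_id = '<your-warehouse-id>'      -- hypothetical placeholder
  AND usage_date >= current_date() - INTERVAL 1 DAY
ORDER BY usage_start_time DESC;
```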
So I’m wondering:
- Has anyone observed similar FGAC cost behavior on dedicated compute?
- Does Databricks charge in 10-minute minimum billing units for FGAC workloads?
- Is there a better way to accurately estimate the actual cost of an FGAC query?
Appreciate any insights or experiences you can share
#FineGrainedAccessControl #CostOptimization
a week ago
Hello @JeremySu
> Has anyone observed similar FGAC cost behavior on dedicated compute?
Yes, I’ve seen the same behavior — it always shows 10 minutes of usage.
I believe this happens because the cluster has a 10-minute auto-termination setting, even if the query itself only runs for a few seconds.
Also, the usage timestamps always look rounded to the window boundary; they never show an exact time like 2025-10-12T12:32:13.000+00:00.
> Does Databricks charge in 10-minute minimum billing units for FGAC workloads?
I don’t think so. I’ve noticed that even with the same “10 minutes” of usage time, the actual usage quantity can differ.
That probably means that, in the backend, Databricks only charges for the real compute time used, not strictly in fixed 10-minute blocks. (see previous image)
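If you want to verify this yourself, a sketch like the following groups billing records by window length, so you can see whether usage_quantity varies across windows of the same size (the SKU filter is an assumption; adjust it to match how FGAC records appear in your account):

```sql
-- Compare DBU quantities across billing records with identical window lengths.
-- If quantities differ within the same window length, billing is not a flat
-- per-window charge. The sku_name filter below is an assumption.
SELECT
  timestampdiff(MINUTE, usage_start_time, usage_end_time) AS window_minutes,
  COUNT(*)            AS records,
  MIN(usage_quantity) AS min_dbu,
  MAX(usage_quantity) AS max_dbu
FROM system.billing.usage
WHERE sku_name LIKE '%SERVERLESS%'
GROUP BY 1
ORDER BY 1;
```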
> Is there a better way to accurately estimate the actual cost of an FGAC query?
It’s quite difficult to measure accurately.
In a large pipeline, Spark’s physical plan might reuse the same table multiple times for joins, dynamic partition pruning, and other operations.
So even if your code shows only one reference to a table, in the backend there could be several accesses and different usage patterns, meaning the actual compute cost can vary from run to run (see the Docs).
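One way to see those extra accesses is to inspect the physical plan yourself (the view name below is just a placeholder):

```sql
-- Show the physical plan; a single view reference can expand into several
-- scans of the underlying table (joins, dynamic partition pruning, etc.).
EXPLAIN FORMATTED
SELECT * FROM main.demo.secure_view;   -- placeholder three-level name
```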
Hope this helps 🙂 ,
Isi
a week ago
@Isi Thank you for your practical experiment and for sharing your findings—it really helps everyone get a clearer view of FGAC (Fine-Grained Access Control) in Unity Catalog on Databricks. I also hope Databricks can clarify the pricing more transparently.😎
Friday
You’ve observed that Fine-grained Access Control (FGAC) queries on Databricks dedicated compute can be billed in a way that seems disproportionate to actual execution time: a very short query (2.39s) results in a 10-minute usage window and a higher-than-expected DBU charge. Here’s a breakdown of what’s known and what others have seen about this behavior:
Many users have reported that Databricks billing for dedicated compute, especially with features like FGAC or Unity Catalog, reflects minimum billing increments rather than actual query duration alone. For workloads such as Photon or Unity Catalog–enabled clusters, usage is often rounded up to the nearest 10 minutes or hour, depending on the compute SKU.
- Databricks typically bills SQL Warehouse compute in 10-minute minimum increments.
- Even if your query runs for only a couple of seconds, the billing period can reflect the full 10-minute charge.
- This is by design: Databricks allocates dedicated compute resources (including spin-up, resource assignment, and teardown time) for secure FGAC execution, and the minimum charge covers this allocation.
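To put the numbers from the original post in that frame: 0.0811 DBU billed over a 10-minute window is an effective 0.0811 × 6 ≈ 0.49 DBU per hour of warehouse uptime; the ~122 DBU/hour figure only appears if you divide the charge by the 2.39-second runtime instead of the billed window.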
When a user executes a query on a dedicated SQL Warehouse with FGAC, the warehouse is spun up if not already running.
Billing for the session is tracked between usage_start_time and usage_end_time, reflecting resource usage including overhead, not just query runtime.
If your compute remains active for additional queries, the minimum increment applies to the entire period of activity. A single short query, with no subsequent requests, can end up billed for the minimum window.
You can’t solely rely on query execution time for cost estimation; instead, check the usage_start_time and usage_end_time in system.billing.usage.
Add up the total DBUs consumed in the window and factor in the 10-minute increment billing model.
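As a rough sketch, you can compute the effective rate per billed window straight from the table, dividing by the billed window rather than the query runtime:

```sql
-- Effective DBU/hour per billing record, based on the billed window rather
-- than individual query runtime.
SELECT
  usage_start_time,
  usage_end_time,
  usage_quantity AS dbus,
  usage_quantity
    / (nullif(timestampdiff(SECOND, usage_start_time, usage_end_time), 0) / 3600.0)
    AS effective_dbu_per_hour
FROM system.billing.usage
WHERE usage_date >= current_date() - INTERVAL 7 DAY
ORDER BY usage_start_time DESC;
```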
To get the best estimate:
- Group queries into bursts within a 10-minute window rather than spacing them out.
- If queries are sporadic or batched, schedule them to make the most of each minimum billing window (see the sketch below for one way to spot sporadic usage).
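The sketch below counts queries per 10-minute bucket; many buckets containing a single query suggest workloads that could be batched. It assumes the system.query.history table (Public Preview) exposes a start_time column, so adjust names if your workspace differs:

```sql
-- Count queries per 10-minute bucket; sparse buckets indicate workloads that
-- could be batched into fewer billing windows.
-- Assumes system.query.history (Public Preview) with a start_time column.
SELECT
  date_trunc('HOUR', start_time)   AS hour_bucket,
  floor(minute(start_time) / 10)   AS ten_minute_slot,
  COUNT(*)                         AS queries
FROM system.query.history
WHERE start_time >= current_timestamp() - INTERVAL 1 DAY
GROUP BY 1, 2
ORDER BY 1, 2;
```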
It’s common to see “overbilling” for quick queries with dedicated SQL Warehouses under FGAC.
Some users work around this by keeping warehouses warm or batching queries to optimize utilization relative to billing periods.
| Feature/Behavior | Details/Community Input |
|---|---|
| Minimum billing increment | 10 minutes for SQL Warehouses (including FGAC) |
| Billed on execution time only? | No; overhead and minimum windows apply |
| Suggestions for accurate cost estimation | Aggregate queries; schedule in bursts |
In summary: Yes, the behavior you saw is expected — Databricks rounds FGAC query billing to a minimum 10-minute window on dedicated compute. Actual cost estimation should account for this window, not just runtime. For lower costs, group and schedule queries to maximize each billing window’s usage.
Sunday
Hi @mark_ott
Thank you very much for providing such a detailed and insightful explanation.
This clearly resolves our confusion about why an FGAC query that ran for only a few seconds still incurred the DBU consumption shown on the bill: the "10-minute minimum billing increment" rule for dedicated compute.
We sincerely hope that Databricks will make these "minimum billing increment" rules more transparent in its official Billing Documentation, especially as they apply to features like FGAC and others that require dedicated resource spin-up.
This would greatly help users like us to have a more accurate basis for architecture design and cost estimation, preventing unexpected charges in the future.
Thank you again for your excellent clarification.