Administration & Architecture
Explore discussions on Databricks administration, deployment strategies, and architectural best practices. Connect with administrators and architects to optimize your Databricks environment for performance, scalability, and security.

A question about Databricks Fine-grained Access Control (FGAC) cost on dedicated compute

JeremySu
New Contributor II

Hi All,

Recently, while testing Fine-grained Access Control (FGAC) on dedicated compute, I came across something that seems a bit unusual, and I'd like to ask if anyone else has seen similar behavior.

I created a view with only one record, and had another user (who does not have access to the underlying table) run a simple SELECT query on it.

From the Query History, I can confirm that the query was indeed executed through FGAC.

However, when I checked the billing record in system.billing.usage, I noticed that this query only ran for 2.39 seconds, yet it consumed about 0.0811 DBU.

If we extrapolate that, it would be roughly 122 DBU per hour — which is almost equivalent to running a 2X-Large SQL Warehouse continuously.
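For reference, the extrapolation is simple arithmetic (a minimal sketch; the 0.0811 DBU and 2.39 s figures are the ones from my billing record and Query History):

```python
# Extrapolate an hourly DBU rate from a single short-lived query.
dbus_consumed = 0.0811    # usage_quantity reported in system.billing.usage
runtime_seconds = 2.39    # query duration from Query History

dbus_per_hour = dbus_consumed / runtime_seconds * 3600
print(f"{dbus_per_hour:.1f} DBU/hour")  # roughly 122 DBU/hour
```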

What’s puzzling is that the time window between usage_start_time and usage_end_time is 10 minutes, even though the FGAC query itself only took 2.39 seconds to execute. 

JeremySu_0-1761878010180.png

So I’m wondering:

- Has anyone observed similar FGAC cost behavior on dedicated compute?

- Does Databricks charge in 10-minute minimum billing units for FGAC workloads?

- Is there a better way to accurately estimate the actual cost of an FGAC query?

Appreciate any insights or experiences you can share 

#FineGrainedAccessControl #CostOptimization

2 REPLIES

Isi
Honored Contributor III

Hello @JeremySu 

Has anyone observed similar FGAC cost behavior on dedicated compute?

Yes, I’ve seen the same behavior — it always shows 10 minutes of usage.

I believe this happens because the cluster has a 10-minute auto-termination setting, even if the query itself only runs for a few seconds.

Also, the usage timestamps are always similar — they never show exact times like 2025-10-12T12:32:13.000+00:00.

Captura de pantalla 2025-11-02 a las 18.40.02.png

Does Databricks charge in 10-minute minimum billing units for FGAC workloads?

I don’t think so. I’ve noticed that even with the same “10 minutes” of usage time, the actual usage quantity can differ.

That probably means that, in the backend, Databricks only charges for the real compute time used, not strictly in fixed 10-minute blocks. (see previous image)
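One way to sanity-check this is to compare rows that share the same 10-minute window: if the quantities differ, the window is just a reporting granularity, not a billing unit. A quick sketch (the 0.0811 value comes from this thread; the second quantity is made up for illustration):

```python
# Hypothetical usage rows sharing the same 10-minute reporting window.
# If billing were in fixed 10-minute blocks, the quantities would match.
rows = [
    {"usage_quantity": 0.0811, "window_minutes": 10},  # figure from the thread
    {"usage_quantity": 0.0412, "window_minutes": 10},  # made-up second query
]

# Implied hourly DBU rate if the whole window were billed at a flat rate.
implied_rates = [r["usage_quantity"] * 60 / r["window_minutes"] for r in rows]
for r, rate in zip(rows, implied_rates):
    print(f"window={r['window_minutes']} min, "
          f"quantity={r['usage_quantity']} DBU -> {rate:.4f} DBU/hour")
```

Differing implied rates across identical windows would support the idea that only the real compute time is charged.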

Is there a better way to accurately estimate the actual cost of an FGAC query?

It’s quite difficult to measure accurately.

In a large pipeline, Spark’s physical plan might reuse the same table multiple times for joins, dynamic partition pruning, and other operations.

So even if your code shows only one reference to a table, in the backend there could be several accesses and different usage patterns, meaning the actual compute cost could vary each time. Docs

Hope this helps 🙂,

Isi

JeremySu
New Contributor II

@Isi Thank you for your practical experiment and for sharing your findings — it really helps everyone get a clearer view of FGAC (Fine-Grained Access Control) in Unity Catalog on Databricks. I also hope Databricks can clarify the pricing more transparently. 😎