Hello @JeremySu
Has anyone observed similar FGAC cost behavior on dedicated compute?
Yes, I've seen the same behavior; it always shows 10 minutes of usage.
I believe this happens because the cluster has a 10-minute auto-termination setting, even if the query itself only runs for a few seconds.
Also, the usage timestamps always look rounded; they never show exact times like 2025-10-12T12:32:13.000+00:00.

Does Databricks charge in 10-minute minimum billing units for FGAC workloads?
I don't think so. I've noticed that even with the same "10 minutes" of usage time, the actual usage quantity can differ.
That probably means the backend charges only for the compute actually consumed, not in fixed 10-minute blocks (see the previous image).
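A minimal sketch of that billing model, assuming cost scales with the consumed usage quantity (DBUs) rather than the reported wall-clock window; the rate and record values below are made up for illustration:

```python
# Hypothetical usage records: both show the same 10-minute window,
# but the usage_quantity (DBUs actually consumed) differs, so the
# billed cost differs too.
records = [
    {"usage_minutes": 10, "usage_quantity": 0.05},
    {"usage_minutes": 10, "usage_quantity": 0.18},
]

DBU_RATE_USD = 0.55  # made-up per-DBU price, for illustration only

for r in records:
    cost = r["usage_quantity"] * DBU_RATE_USD
    print(f"{r['usage_minutes']} min window -> "
          f"{r['usage_quantity']} DBUs -> ${cost:.4f}")
```

The point is that two queries with identical "10 minutes" of reported usage can still bill differently, because the quantity, not the window, drives the cost.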
Is there a better way to accurately estimate the actual cost of an FGAC query?
It's quite difficult to measure accurately.
In a large pipeline, Sparkās physical plan might reuse the same table multiple times for joins, dynamic partition pruning, and other operations.
So even if your code shows only one reference to a table, in the backend there could be several accesses and different usage patterns, meaning the actual compute cost could vary each time. Docs
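To see why the cost can vary per run, you can think of each physical access to the table producing its own usage; a sketch under the assumption that one logical table reference fans out into several backend accesses (the record shape here is hypothetical, not a real system-table schema):

```python
from collections import defaultdict

# Hypothetical per-access usage for one pipeline run: the code
# references the "sales" table once, but the physical plan scans
# it several times (join rescan, dynamic partition pruning probe).
accesses = [
    {"query_id": "q1", "table": "sales", "usage_quantity": 0.04},
    {"query_id": "q1", "table": "sales", "usage_quantity": 0.07},  # join rescan
    {"query_id": "q1", "table": "sales", "usage_quantity": 0.02},  # pruning probe
]

# Total DBUs attributed to the query is the sum over all accesses,
# so it varies run to run as the plan changes.
per_query = defaultdict(float)
for a in accesses:
    per_query[a["query_id"]] += a["usage_quantity"]

print(dict(per_query))
```

Since the number and size of these accesses depend on the plan Spark chooses, the total can differ even for the same code.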
Hope this helps,
Isi