Hey there,
I can totally relate to your struggle with deciphering the cost breakdown for Databricks usage; it can be a bit of a maze, can't it? But fear not, I've been down that rabbit hole myself, and I might just have the solution you're looking for.
First off, it's essential to ensure that you have proper tagging set up for your Databricks resources. It sounds like you're running into trouble because of missing tags. In AWS, tagging is your best friend for cost allocation, so make sure each Databricks cluster is tagged with a ClusterName tag (or whatever identifier makes sense for your team). Databricks propagates a cluster's custom tags down to the underlying EC2 instances and EBS volumes, which is exactly what lets you trace that spend in your AWS bill.
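If it helps, here's a minimal sketch of attaching custom tags at cluster creation through the Clusters REST API (`/api/2.0/clusters/create`). The workspace URL, token, tag keys, and the example cluster settings are all placeholders, so swap in your own values:

```python
import requests

# Placeholders -- substitute your own workspace URL and personal access token.
DATABRICKS_HOST = "https://my-workspace.cloud.databricks.com"
TOKEN = "dapi-XXXX"

# Create a cluster with custom tags. The custom_tags map is propagated
# to the underlying EC2 instances, which is what makes the cluster's
# spend visible in AWS cost reports.
resp = requests.post(
    f"{DATABRICKS_HOST}/api/2.0/clusters/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "cluster_name": "etl-nightly",          # example name
        "spark_version": "13.3.x-scala2.12",    # example runtime
        "node_type_id": "i3.xlarge",
        "num_workers": 2,
        "custom_tags": {
            "ClusterName": "etl-nightly",
            "Team": "data-platform",   # hypothetical tag keys for illustration
            "CostCenter": "1234",
        },
    },
)
resp.raise_for_status()
print(resp.json()["cluster_id"])
```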
Once your tags are in place, revisit the Databricks admin console and filter or group your usage by those tags to get a more granular breakdown of your costs. One gotcha on the AWS side: Cost Explorer only picks up a tag after you've activated it as a cost allocation tag in the Billing console, and it only applies to usage from that point forward. With that done, you can correlate the Databricks numbers with what you see in Cost Explorer.
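For example, here's a rough boto3 sketch that pulls a month of spend grouped by the ClusterName tag (the dates and tag key are assumptions, so adjust for your setup):

```python
import boto3

# Cost Explorer is a global API served out of us-east-1.
ce = boto3.client("ce", region_name="us-east-1")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-05-01", "End": "2024-06-01"},  # placeholder dates
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "ClusterName"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    tag_value = group["Keys"][0]          # e.g. "ClusterName$etl-nightly"
    cost = group["Metrics"]["UnblendedCost"]["Amount"]
    print(f"{tag_value}: ${float(cost):.2f}")
```

The group keys come back as `TagKey$TagValue`, so untagged usage shows up as a bare `ClusterName$` entry, which is a handy way to spot clusters that slipped through without tags.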
I know it sounds a bit complex, but trust me, proper tagging is the linchpin to a clearer cost breakdown. Hope this helps!