@Athul97 provided a pretty solid list of best practices. To go deeper into Budgets & Alerts, I've had good success with the Consumption and Budget feature in the Databricks Account Portal under the Usage menu. Once you embed tagging into all Databricks assets, you get a much clearer picture of usage and can get a handle on where the spend is occurring. This can then be married up with general cloud consumption costs for things like storage and networking, but it gives you more granular reporting inside your workspaces.
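To illustrate what that tag-based reporting can look like, here's a minimal sketch of slicing spend by tag yourself, assuming Unity Catalog system tables are enabled in your account and your assets carry a custom tag key like `cost_center` (the tag key is hypothetical; pick whatever your tagging standard uses). This runs in a Databricks notebook where `spark` is already defined:

```python
from pyspark.sql import functions as F

# system.billing.usage is the Unity Catalog billing system table;
# custom_tags is a map column populated from your asset tags.
usage = spark.table("system.billing.usage")

# Aggregate DBU consumption per cost_center tag over the last 30 days.
spend_by_tag = (
    usage
    .filter(F.col("usage_date") >= F.date_sub(F.current_date(), 30))
    .withColumn("cost_center", F.col("custom_tags").getItem("cost_center"))
    .groupBy("cost_center", "sku_name")
    .agg(F.sum("usage_quantity").alias("dbus"))
    .orderBy(F.desc("dbus"))
)

spend_by_tag.show(truncate=False)
```

Untagged assets will show up with a null `cost_center`, which is itself useful: it tells you where your tagging policy isn't being enforced.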
The other area where I see opportunity is setting up some kind of engineering code review and optimization process. I still see a lot of poor development practices where incorrect use of libraries or inefficient data processing algorithms burn unnecessary cluster cycles. I recently audited a customer's worst-performing jobs and made a number of coding suggestions that led to significant reductions in execution times. Many jobs that previously ran for hours now complete in 30-45 minutes without any changes to the cluster configurations.
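To make that concrete, here's a before/after sketch of one anti-pattern that comes up constantly in these reviews: row-at-a-time Python UDFs where a native Spark expression would do. This is a generic illustration, not from the audit itself, and the table and column names are hypothetical:

```python
from pyspark.sql import functions as F, types as T

df = spark.table("sales.transactions")  # hypothetical table

# Slow: a Python UDF forces row-by-row serialization between the JVM
# and a Python worker, and blocks most optimizer rewrites.
@F.udf(returnType=T.DoubleType())
def discounted(amount, pct):
    return float(amount) * (1.0 - float(pct))

slow = df.withColumn("net", discounted("amount", "discount_pct"))

# Fast: the same logic as a native column expression stays inside the
# JVM and lets Catalyst (and Photon, if enabled) optimize it.
fast = df.withColumn("net", F.col("amount") * (1.0 - F.col("discount_pct")))
```

Fixes like this are why the runtimes dropped without touching cluster configs: the same hardware simply stops waiting on avoidable serialization and shuffles.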