Hi @data_turtle, That sounds like a valuable addition to Gradient!
The new metrics view for Databricks jobs should help engineers gain better insight into job performance and resource usage over time. Being able to track metrics such as job cost, runtime, core-hours, worker count, input data size, spill to disk, and shuffle read/write gives teams exactly what they need to optimize for both performance and cost efficiency.
I'll check out the blog for more details. Keep up the good work!