
Understand why your jobs' performance is changing over time

data_turtle
New Contributor

Hi Folks -

We released a new metrics view for Databricks jobs in Gradient. It tracks and plots the metrics below so engineers can understand how their jobs are changing over time (see the rough sketch after the list for pulling a couple of these numbers yourself).

  • Job cost (DBU + cloud fees)
  • Job runtime
  • Number of core-hours
  • Number of workers
  • Input data size
  • Spill to disk
  • Shuffle read/write

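If you want a quick sanity check of a couple of these numbers outside Gradient, here's a minimal sketch that lists recent runs of a job through the Databricks Jobs API (runs/list) and derives runtime plus a rough core-hours estimate. The DATABRICKS_HOST/DATABRICKS_TOKEN environment variables, JOB_ID, and CORES_PER_WORKER are placeholders for your own setup, not anything Gradient exposes.

    # Sketch only: assumes a workspace URL and PAT in env vars, and a
    # hypothetical job ID; core count per worker depends on your node type.
    import os
    import requests

    HOST = os.environ["DATABRICKS_HOST"]    # e.g. https://<workspace>.cloud.databricks.com
    TOKEN = os.environ["DATABRICKS_TOKEN"]  # personal access token
    JOB_ID = 123                            # hypothetical job ID
    CORES_PER_WORKER = 8                    # assumption: depends on the worker node type

    resp = requests.get(
        f"{HOST}/api/2.1/jobs/runs/list",
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"job_id": JOB_ID, "completed_only": "true", "limit": 25},
    )
    resp.raise_for_status()

    for run in resp.json().get("runs", []):
        # Runtime in hours from the run's start/end timestamps (milliseconds).
        runtime_h = (run["end_time"] - run["start_time"]) / 3_600_000
        # Very rough core-hours estimate; a real calculation would use the
        # cluster's actual worker count and any autoscaling history.
        workers = run.get("cluster_spec", {}).get("new_cluster", {}).get("num_workers", 0)
        core_hours = runtime_h * workers * CORES_PER_WORKER
        print(f"run {run['run_id']}: runtime={runtime_h:.2f} h, est. core-hours={core_hours:.1f}")

Gradient does the tracking and plotting for you, but a one-off script like this is handy for cross-checking a single job.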
Check out our blog here!

1 REPLY

Kaniz
Community Manager

Hi @data_turtle, that sounds like a valuable addition to Gradient!

The new metrics view for Databricks jobs will surely help engineers gain better insight into job performance and resource usage over time. Being able to track job cost, runtime, core-hours, workers, input data size, spill to disk, and shuffle read/write provides valuable information for optimizing both performance and cost efficiency.
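For anyone curious, stage-level spill and shuffle figures are also exposed by Spark's own monitoring REST API, which is a useful cross-reference for the last two metrics in the list. A minimal sketch follows; the SPARK_UI address and APP_ID are assumptions about your environment, not part of Gradient or Databricks-specific.

    # Sketch only: reads stage-level shuffle and spill bytes from Spark's
    # standard /api/v1 monitoring endpoint (live UI or history server).
    import requests

    SPARK_UI = "http://localhost:4040"   # assumption: your Spark UI or history server URL
    APP_ID = "app-20240101123456-0000"   # hypothetical application ID

    stages = requests.get(f"{SPARK_UI}/api/v1/applications/{APP_ID}/stages").json()
    for s in stages:
        print(
            f"stage {s['stageId']}: "
            f"shuffle read={s['shuffleReadBytes']:,} B, "
            f"shuffle write={s['shuffleWriteBytes']:,} B, "
            f"disk spill={s['diskBytesSpilled']:,} B"
        )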

I'll check out the blog for more details. Keep up the good work!