That's pretty much it. Usually, people take the time it takes to run a job/query/process as their KPI.
From there you check which processes are taking the most time, drilling down one by one. Sometimes it's a misplaced .cache(), .collect() or display() that forces Spark to effectively compute everything up to that point. You can do the same for queries with the query profiler: check whether there was a shuffle, how many rows are being processed, and whether there was disk spill. You can also check for skewness, as in the sketch below.
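For example, before even opening the Spark UI you can look at the physical plan and the key distribution straight from a notebook. This is a minimal sketch, assuming a Parquet table path and a grouping column called customer_id, both of which are placeholders:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("debug-slow-job").getOrCreate()
df = spark.read.parquet("/path/to/table")  # placeholder path

# 1. Inspect the physical plan before triggering any action:
#    look for Exchange (shuffle) operators and unexpected full scans.
df.groupBy("customer_id").count().explain(mode="formatted")

# 2. Check for skew: if a handful of keys hold most of the rows,
#    one task ends up doing most of the work.
(df.groupBy("customer_id")
   .count()
   .orderBy(F.desc("count"))
   .show(10))

# 3. Watch for accidental full materialisation: .collect(), display()
#    or an early .cache() followed by an action force Spark to compute
#    everything up to that point.
# rows = df.collect()     # pulls the whole dataset onto the driver
# df.cache().count()      # materialises the cache immediately
```

If the top keys in step 2 dwarf the rest, that's your skew; if the plan in step 1 is full of Exchange nodes you didn't expect, that's where the shuffle (and likely the disk spill) is coming from.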
I really like this blog: https://www.databricks.com/discover/pages/optimize-data-workloads-guide