To identify the causes of a data process's poor performance, we need to navigate the Spark UI and analyze its metrics manually. However, repeating those steps for a large group of Spark applications would be very time-consuming.
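For a single application, the per-stage metrics shown in the Spark UI can also be fetched programmatically from Spark's monitoring REST API (e.g. `GET http://<driver>:4040/api/v1/applications/<app-id>/stages`). The sketch below works on an illustrative, hand-built sample of that response (field names follow the real API, but the values and the spill threshold are assumptions) to flag stages that spilled, which is one common symptom of skewed joins or undersized executors:

```python
# Illustrative sample of (part of) what Spark's monitoring REST API
# returns from GET .../api/v1/applications/<app-id>/stages.
# Real responses contain many more fields; these values are made up.
sample_stages = [
    {"stageId": 1, "name": "count at Job.scala:10",
     "executorRunTime": 120_000, "memoryBytesSpilled": 0,
     "shuffleReadBytes": 1_000_000},
    {"stageId": 2, "name": "join at Job.scala:25",
     "executorRunTime": 900_000, "memoryBytesSpilled": 5_000_000_000,
     "shuffleReadBytes": 80_000_000_000},
]

def flag_problem_stages(stages, spill_threshold_bytes=0):
    """Return stages whose spill exceeds the threshold -- a common
    sign of data skew or memory pressure on the executors."""
    return [s for s in stages
            if s["memoryBytesSpilled"] > spill_threshold_bytes]

flagged = flag_problem_stages(sample_stages)
for stage in flagged:
    print(stage["stageId"], stage["name"])
```

Looping this over every application's driver (or its history server entry) is possible, but it is exactly the kind of per-app plumbing I would like to avoid.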
Given this, my question is: is there a system table in Databricks, or some other strategy, that would let me visualize these metrics across a group of processes, so I can assess at scale how poorly my company's jobs are performing?