We have a Spark pipeline that produces more than 3,000 Spark jobs. After the pipeline finishes and the cluster shuts down, only a subset of these (fewer than 1,000) can still be seen in the Spark UI.
We would like to retain access to the full Spark UI after the pipeline has terminated and the cluster has shut down, for performance-monitoring purposes. Is it possible to deploy a Spark History Server in Databricks? If not, what approach would you recommend?
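For context, what we are after is the equivalent of the standard open-source setup, where completed applications remain browsable because their event logs are persisted. This is only a sketch of that OSS configuration (the paths below are placeholders, not our actual storage locations):

```
# spark-defaults.conf — persist event logs so a History Server can replay them
spark.eventLog.enabled           true
spark.eventLog.dir               s3a://my-bucket/spark-events      # placeholder path
spark.history.fs.logDirectory    s3a://my-bucket/spark-events      # same location, read side

# Then, on a long-lived host, start the History Server shipped with Spark:
#   $SPARK_HOME/sbin/start-history-server.sh
# and browse completed applications at http://<host>:18080
```

We are asking whether something analogous is possible on Databricks, or whether there is a Databricks-native alternative.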