Can a Spark History Server be created in Databricks?

vladcrisan
New Contributor II

We have a Spark pipeline producing more than 3k Spark jobs. After the pipeline finishes and the cluster shuts down, only a subset (<1k) of these can be recovered from the Spark UI.

We would like to have access to the full Spark UI after the pipeline has terminated and the cluster has shut down, for performance-monitoring purposes. Is it possible to deploy a Spark History Server in Databricks? If not, what approach would you recommend?
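For context on what makes this possible at all: the Spark UI is rebuilt from event logs, so the first step is to make sure those logs outlive the cluster. Below is a minimal sketch, assuming the public Clusters API 2.0 and its documented cluster_log_conf field; the host, token, node type, and runtime version are placeholders, not values from this thread.

```python
# Minimal sketch (not from the thread): create a cluster with log delivery to
# DBFS so the Spark event logs survive cluster termination.
# HOST/TOKEN and the node/runtime values are placeholders.
import requests

HOST = "https://<your-workspace>.cloud.databricks.com"
TOKEN = "<personal-access-token>"

payload = {
    "cluster_name": "pipeline-cluster",
    "spark_version": "10.4.x-scala2.12",
    "node_type_id": "i3.xlarge",
    "num_workers": 2,
    # Documented Clusters API field: periodically delivers driver logs and
    # Spark event logs to the given DBFS destination.
    "cluster_log_conf": {"dbfs": {"destination": "dbfs:/cluster-logs"}},
}

resp = requests.post(
    f"{HOST}/api/2.0/clusters/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
)
resp.raise_for_status()
print(resp.json()["cluster_id"])
```

With log delivery configured this way, the event logs land under dbfs:/cluster-logs/<cluster-id>/eventlog and can be replayed later, as discussed in the replies below.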

5 REPLIES

Debayan
Esteemed Contributor III

Hi @Vlad Crisan, as of now a workspace is limited to 1000 concurrent job runs. A 429 Too Many Requests response is returned when you request a run that cannot start immediately. The number of jobs a workspace can create in an hour is limited to 10000 (this includes "runs submit").

https://docs.databricks.com/workflows/jobs/jobs.html#create-run-and-manage-databricks-jobs

To increase the jobs limit in a workspace, see:

https://docs.databricks.com/administration-guide/workspace/enable-increased-jobs-limit.html
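Not directly the OP's question, but since the 429 behaviour comes up: below is a minimal sketch of submitting a one-time run with exponential backoff when the workspace hits its concurrent-run limit. It assumes the public Jobs API 2.1 (jobs/runs/submit); HOST, TOKEN, and the payload are placeholders.

```python
# Minimal sketch: submit a one-time run and back off when the workspace hits
# its concurrent-run limit (HTTP 429). HOST/TOKEN are placeholders.
import time
import requests

HOST = "https://<your-workspace>.cloud.databricks.com"
TOKEN = "<personal-access-token>"

def submit_run(payload: dict, max_retries: int = 5) -> dict:
    for attempt in range(max_retries):
        resp = requests.post(
            f"{HOST}/api/2.1/jobs/runs/submit",
            headers={"Authorization": f"Bearer {TOKEN}"},
            json=payload,
        )
        if resp.status_code == 429:      # too many concurrent runs right now
            time.sleep(2 ** attempt)     # exponential backoff, then retry
            continue
        resp.raise_for_status()
        return resp.json()               # contains the run_id
    raise RuntimeError("run could not be scheduled after retries")
```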

vladcrisan
New Contributor II

Hi @Debayan Mukherjee, my question is about the total number of Spark jobs on one cluster and how they can be retrieved in the Spark UI after the cluster shuts down, not about the number of concurrent jobs. A concrete example: if a notebook running on a cluster produces 3k jobs in total (e.g. sequentially), then after the underlying cluster shuts down I can only see roughly 1k of them in the Spark UI. My question is whether there is a way to recover all jobs in the Spark UI.

Hubert-Dudek
Esteemed Contributor III

It depends on what data you need. Integrating with Datadog can be a good option: https://www.datadoghq.com/blog/databricks-monitoring-datadog/

You can also redirect logs to Azure Log Analytics.

vladcrisan
New Contributor II

I would like to recover all the information displayed in the Spark UI. Datadog is a good suggestion, but unfortunately we can't use external services in our application. Azure Log Analytics would be an option, but I couldn't find any reference showing how to recover the full Spark UI through it.
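One workaround that stays inside Databricks, sketched below: the data behind the Spark UI is also exposed by Spark's standard monitoring REST API, so it can be snapshotted to durable storage before the cluster shuts down. This is a minimal sketch, not a tested recipe; it assumes the driver UI is reachable from the notebook at sc.uiWebUrl (standard PySpark, though on Databricks the cluster's internal address may be needed instead), and the output path is a placeholder.

```python
# Minimal sketch: before the cluster terminates, dump what the Spark UI shows
# (jobs, stages, executors) via Spark's standard monitoring REST API and
# persist it as JSON on DBFS so it outlives the cluster.
import json
import requests
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
# Assumption: the driver UI is reachable at this address from the notebook.
base = f"{spark.sparkContext.uiWebUrl}/api/v1"

app_id = requests.get(f"{base}/applications").json()[0]["id"]

snapshot = {
    endpoint: requests.get(f"{base}/applications/{app_id}/{endpoint}").json()
    for endpoint in ("jobs", "stages", "executors")
}

# /dbfs is the FUSE mount of DBFS on Databricks; the path is a placeholder.
with open("/dbfs/tmp/spark-ui-snapshot.json", "w") as f:
    json.dump(snapshot, f)
```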

Sandeep
Contributor III

@Vlad Crisan, you can use Databricks clusters to replay the events. Please follow this KB article: https://kb.databricks.com/clusters/replay-cluster-spark-events

Note: spin up a cluster running Databricks Runtime 10.4 LTS.
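For completeness, the same event logs can also be replayed off-platform with the open-source Spark History Server. Below is a minimal sketch, assuming a local Spark distribution ($SPARK_HOME) and that the event logs have already been copied down from DBFS (e.g. with `databricks fs cp --recursive`) to /tmp/spark-event-logs; both paths are placeholders.

```python
# Minimal sketch: launch the open-source Spark History Server against event
# logs copied down from DBFS. Assumes a local Spark install at $SPARK_HOME and
# that the logs are already in /tmp/spark-event-logs (both placeholders).
import os
import subprocess

SPARK_HOME = os.environ["SPARK_HOME"]
EVENT_LOG_DIR = "/tmp/spark-event-logs"

subprocess.run(
    [f"{SPARK_HOME}/sbin/start-history-server.sh"],
    env={
        **os.environ,
        # Standard Spark config: the directory the History Server reads from.
        "SPARK_HISTORY_OPTS": f"-Dspark.history.fs.logDirectory=file:{EVENT_LOG_DIR}",
    },
    check=True,
)
# The reconstructed UI is then served at http://localhost:18080 by default.
```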
