Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Can Spark History server be created in Databricks?

vladcrisan
New Contributor II

We have a Spark pipeline producing more than 3k Spark jobs. After the pipeline finishes and the cluster shuts down, only a subset (<1k) of these can be recovered from the Spark UI.

We would like to have access to the full Spark UI after the pipeline has terminated and the cluster has shut down, for performance-monitoring purposes. Is it possible to deploy a Spark History Server in Databricks? If not, what approach would you recommend?

5 REPLIES

Debayan
Databricks Employee

Hi @Vlad Crisan, as of now a workspace is limited to 1,000 concurrent job runs. A 429 Too Many Requests response is returned when you request a run that cannot start immediately. The number of jobs a workspace can create in an hour is limited to 10,000 (this includes runs submitted via Runs Submit).

https://docs.databricks.com/workflows/jobs/jobs.html#create-run-and-manage-databricks-jobs

To increase the jobs limit in a workspace, see:

https://docs.databricks.com/administration-guide/workspace/enable-increased-jobs-limit.html

vladcrisan
New Contributor II

Hi @Debayan Mukherjee, my question is about the total number of Spark jobs on one cluster and how they can be retrieved in the Spark UI after the cluster shuts down, rather than about the number of concurrent job runs. Concrete example: if a notebook running on a cluster produces 3k jobs in total (e.g. sequentially), then after the underlying cluster shuts down I can only see approximately 1k of them in the Spark UI. My question is whether there is a way to recover all jobs in the Spark UI.
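For what it's worth, an observation not confirmed in this thread: the ~1k cutoff matches Spark's default UI retention limits, which evict old jobs and stages from the live UI once exceeded. A quick check from a notebook, where spark is the session Databricks provides:

```python
# Minimal sketch (assumption: these retention limits explain the ~1k cutoff).
# Both settings default to 1000. They are static, so to raise them you would
# set them in the cluster's Spark config at creation time, not at runtime.
print(spark.conf.get("spark.ui.retainedJobs", "1000"))
print(spark.conf.get("spark.ui.retainedStages", "1000"))
```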

Hubert-Dudek
Esteemed Contributor III

It depends on what data you need. It can be useful to integrate with Datadog: https://www.datadoghq.com/blog/databricks-monitoring-datadog/

You can also redirect logs to Azure Log Analytics.
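A minimal sketch of what the Datadog integration typically looks like, assuming a cluster-scoped init script; the DBFS path, the environment-variable handling, and the install URL here are illustrative assumptions, so follow the blog post above for the supported setup:

```python
# Hypothetical init script (paths and URL are assumptions, see lead-in).
# Written once from a notebook, then referenced in the cluster's init-script
# settings so every node installs the Datadog agent at startup.
dbutils.fs.put(
    "dbfs:/databricks/init-scripts/datadog-install.sh",
    """#!/bin/bash
# DD_API_KEY is expected as a cluster environment variable (e.g. from a secret).
DD_API_KEY="$DD_API_KEY" bash -c \\
  "$(curl -L https://raw.githubusercontent.com/DataDog/datadog-agent/master/cmd/agent/install_script.sh)"
""",
    True,  # overwrite if the script already exists
)
```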

vladcrisan
New Contributor II

I would like to recover all the information displayed in the Spark UI. Datadog is a good suggestion, but unfortunately we can't use external services in our application. Azure Log Analytics would be an option, but I couldn't find any reference showing how to recover the full Spark UI through it.

Sandeep
Contributor III

@Vlad Crisan, you can use Databricks clusters to replay the events. Please follow this KB article: https://kb.databricks.com/clusters/replay-cluster-spark-events

Note: please spin up the replay cluster on Databricks Runtime 10.4 LTS.
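For anyone landing here, a rough sketch of the replay idea under two stated assumptions: cluster log delivery on the original cluster wrote the Spark event log to DBFS, and the Spark distribution on the replay cluster lives at /databricks/spark with the stock sbin scripts. The KB article above is authoritative; <cluster-id> is a placeholder.

```python
import os
import subprocess

# Assumption: cluster log delivery put the event log under this DBFS path.
dbutils.fs.cp(
    "dbfs:/cluster-logs/<cluster-id>/eventlog/",
    "file:/tmp/eventlogs/",
    recurse=True,
)

# Start a Spark History Server against the local copy (assumed Spark home).
subprocess.run(
    ["/databricks/spark/sbin/start-history-server.sh"],
    env={**os.environ,
         "SPARK_HISTORY_OPTS": "-Dspark.history.fs.logDirectory=file:/tmp/eventlogs"},
    check=True,
)
```

In open-source Spark the history server then serves its UI on port 18080 by default, with every job from the replayed event log rather than only the last ~1k retained by the live UI.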
