In any Spark application, the Spark driver plays a critical role and performs the following functions (illustrated in the sketch after the list):
1. Initiating a Spark Session
2. Communicating with the cluster manager to request resources (CPU, memory, etc.) for Spark's executors (JVMs)
3. Transforming all Spark operations into DAG computations
4. Scheduling and distributing DAG computations as tasks across the Spark executors
5. Communicating with Spark executors
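A minimal sketch of these roles in action (the app name, local master, and dataset are illustrative assumptions, not a production setup):

```scala
import org.apache.spark.sql.SparkSession

object DriverRoleSketch {
  def main(args: Array[String]): Unit = {
    // (1) The driver initiates the SparkSession, the application's entry point.
    val spark = SparkSession.builder()
      .appName("driver-role-sketch")  // illustrative name
      .master("local[*]")             // assumption: local mode so the sketch runs anywhere
      .getOrCreate()

    // (3) Transformations are lazy: at this point the driver is only building the DAG.
    val numbers = spark.range(1, 1000000)
    val evens   = numbers.filter(_ % 2 == 0)

    // (4)/(5) An action makes the driver turn the DAG into stages and tasks,
    // schedule them on the executors, and gather the (small) result back.
    println(evens.count())

    spark.stop()
  }
}
```

Nothing runs on the executors until the action (count) fires; up to that point the driver has only been planning.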
Avoiding driver overload and driver failure is essential to maintaining a high SLA for your Spark applications (see the sketch below).
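One common way the driver gets overloaded is an action that funnels the whole dataset back into the driver JVM. A minimal sketch of the safer alternative (the dataset and output path are illustrative):

```scala
import org.apache.spark.sql.SparkSession

object DriverLoadSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("driver-load-sketch")  // illustrative name
      .master("local[*]")             // assumption: local mode for the demo
      .getOrCreate()

    // Stand-in for a large dataset.
    val bigDf = spark.range(1, 100000000L).toDF("id")

    // Risky: collect() ships every row into the driver JVM and can OOM it.
    // val everything = bigDf.collect()

    // Safer: keep the data distributed. Executors write their own partitions;
    // only a single Long ever travels back to the driver.
    bigDf.write.mode("overwrite").parquet("/tmp/big-output")  // illustrative path
    println(bigDf.count())

    spark.stop()
  }
}
```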
It is recommended to distribute your workloads across several smaller clusters instead of running many applications on one big cluster: no matter how big the cluster is, the Spark driver's functionality cannot be distributed within it.
#dataengineering #apachespark