I have an all-purpose compute cluster that processes different data sets for various jobs, and I am struggling to tune executor settings such as the one below.
spark.executor.memory 4g
Is it allowed to override the default executor settings by specifying such configurations at the cluster level for an all-purpose compute cluster (in the Spark config section under Advanced cluster options)?
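For reference, this is the format I am entering in the Spark config box, one space-separated key-value pair per line (the second key and both values below are just placeholder examples I'd like to tune per workload):

```
spark.executor.memory 4g
spark.executor.cores 2
```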
And how do I specify such configurations at runtime when submitting a job to a job compute cluster?
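For the job-cluster case, my understanding is that the Jobs API 2.1 `runs/submit` endpoint accepts a `spark_conf` map on `new_cluster`, so something like the sketch below should apply the setting when the job cluster launches. The workspace URL, token, Spark version, node type, and notebook path are all placeholders. Is this the recommended approach?

```python
import requests

# Placeholders: substitute your workspace URL and a valid access token.
HOST = "https://<your-workspace>.cloud.databricks.com"
TOKEN = "<personal-access-token>"

payload = {
    "run_name": "run-with-custom-executor-memory",
    "tasks": [
        {
            "task_key": "main",
            # Executor settings go in spark_conf on the new (job) cluster,
            # so they take effect when the cluster starts.
            "new_cluster": {
                "spark_version": "13.3.x-scala2.12",  # placeholder
                "node_type_id": "i3.xlarge",          # placeholder
                "num_workers": 2,
                "spark_conf": {"spark.executor.memory": "4g"},
            },
            "notebook_task": {"notebook_path": "/Workspace/path/to/notebook"},
        }
    ],
}

resp = requests.post(
    f"{HOST}/api/2.1/jobs/runs/submit",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
)
resp.raise_for_status()
print(resp.json())  # contains the run_id of the submitted run
```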