Why doesn't Databricks allow setting executor metrics?
02-01-2025 11:10 AM - edited 02-01-2025 11:19 AM
I have an all-purpose compute cluster that processes different data sets for various jobs, and I am struggling to tune executor settings such as the one below.
spark.executor.memory 4g
Am I allowed to override the default executor settings and specify such configurations at the cluster level for an all-purpose compute cluster (in the Spark config section under Advanced options)?
How do I specify such configurations at runtime when submitting a job to a job compute cluster?
02-01-2025 06:27 PM - edited 02-01-2025 06:27 PM
Hello @marc88,
Yes, as you mentioned, you can set these properties in the Spark config section under Advanced options 🙂 once the cluster starts up, they are applied to the Spark runtime. Alternatively, you can draft a cluster policy that enforces these settings and apply it to the job compute clusters you create for your workflows.
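If you go the policy route, a minimal sketch of a cluster policy definition could look like this (the spark.executor.cores entry is only an illustrative extra; adjust the keys and values to your workloads):

{
  "spark_conf.spark.executor.memory": {
    "type": "fixed",
    "value": "4g"
  },
  "spark_conf.spark.executor.cores": {
    "type": "allowlist",
    "values": ["2", "4"],
    "defaultValue": "4"
  }
}

Any cluster created under this policy gets spark.executor.memory pinned to 4g, while spark.executor.cores can only be chosen from the allowlisted values.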


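For your second question: when you define the job, you can pass the same properties in the spark_conf block of the job cluster specification, for example in the JSON payload sent to the Jobs API (POST /api/2.1/jobs/create). This is only a sketch; the job name, notebook path, node type, and Spark version below are placeholders you would replace with your own:

{
  "name": "executor-memory-demo",
  "tasks": [
    {
      "task_key": "main",
      "notebook_task": { "notebook_path": "/Workspace/Users/you@example.com/etl_main" },
      "new_cluster": {
        "spark_version": "15.4.x-scala2.12",
        "node_type_id": "Standard_DS3_v2",
        "num_workers": 2,
        "spark_conf": {
          "spark.executor.memory": "4g"
        }
      }
    }
  ]
}

If you prefer to govern this through the policy instead, reference the policy's policy_id inside new_cluster and its fixed values are applied to the job cluster automatically.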