Administration & Architecture
Explore discussions on Databricks administration, deployment strategies, and architectural best practices. Connect with administrators and architects to optimize your Databricks environment for performance, scalability, and security.

why doesn't databricks allow setting executor metrics

marc88
New Contributor II

I have an all-purpose compute cluster that processes different data sets for various jobs, and I am struggling to tune executor settings such as the one below:
spark.executor.memory 4g

Am I allowed to override the default executor settings and specify such configurations at the cluster level for an all-purpose compute cluster (in the Spark config section under Advanced cluster options)?
How do I specify such configurations at runtime when submitting a job to a job compute cluster?

1 REPLY

Alberto_Umana
Databricks Employee

Hello @marc88,

Yes, as you mentioned: you can set these in the Spark config section under Advanced cluster options, and once the cluster boots up the configuration is applied at runtime. 🙂 Alternatively, you can draft a cluster policy and apply it across job computes when creating your workflows.
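For the second part of the question, a job cluster's Spark config can be supplied in the cluster spec at submission time via the `spark_conf` field of `new_cluster`. A minimal sketch of a Jobs API submit payload follows; the run name, task key, notebook path, Spark version, node type, and worker count are all placeholders you would replace with your own values:

```json
{
  "run_name": "example-run",
  "tasks": [
    {
      "task_key": "example-task",
      "notebook_task": { "notebook_path": "/Users/someone@example.com/etl" },
      "new_cluster": {
        "spark_version": "15.4.x-scala2.12",
        "node_type_id": "i3.xlarge",
        "num_workers": 2,
        "spark_conf": {
          "spark.executor.memory": "4g"
        }
      }
    }
  ]
}
```

If you go the cluster policy route instead, a policy can pin the same setting for every job compute it governs with a fixed-value rule on the `spark_conf.spark.executor.memory` attribute.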
