Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.
Troubleshooting Azure Databricks Cluster Pools & spot_bid_max_price Validation Error

singhanuj2803
Contributor

Hope you’re doing well!

I’m reaching out for some guidance on an issue I’ve encountered while setting up Azure Databricks Cluster Pools to reduce cluster spin-up and scale times for our jobs.

Background:
To optimize job execution wait times, I’ve created a cluster pool and a new cluster policy (based on our Job Compute configuration) with the InstancePoolId specified. The goal is to ensure all jobs launched under this policy utilize the pool.

Issue:
When attaching a job to this new policy, I’m facing the following validation error: 

Cluster validation error: Validation failed for azure_attributes.spot_bid_max_price from pool, the value must be present.

What I’ve tried so far:

  1. I reviewed the azure_attributes.spot_bid_max_price parameter, which is optional and controls the max price for spot instances.

  2. In our default Job Compute policy, this property is hidden and set to 100 (a maximum of $100 per hour per instance).

  3. Based on documentation, I attempted to set azure_attributes.spot_bid_max_price = -1 in the cluster policy (to indicate no max price limit), but the error persists.

It seems the issue might be related to how this property is inherited or validated at the pool level, but I haven’t been able to resolve it yet.

Could you please help with the following:

  1. Are there specific steps to configure spot_bid_max_price at the pool level to align with the cluster policy?

  2. Is there a known workaround or best practice for defining this property when using instance pools?

  3. Would you recommend switching the pool to use on-demand instances instead to avoid spot-related validation?
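To help answer question 1, it can be useful to first confirm what the pool itself reports. Below is a minimal sketch (hypothetical host and token; endpoint per the Instance Pools API as I understand it) that fetches the pool definition and checks whether spot_bid_max_price is present:

```python
import json
import urllib.request

def fetch_pool(host: str, token: str, pool_id: str) -> dict:
    """Fetch an instance pool definition via the Instance Pools API."""
    req = urllib.request.Request(
        f"{host}/api/2.0/instance-pools/get?instance_pool_id={pool_id}",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def missing_spot_bid(pool: dict) -> bool:
    """True if the pool uses spot capacity but sets no spot_bid_max_price."""
    attrs = pool.get("azure_attributes", {})
    uses_spot = attrs.get("availability", "").startswith("SPOT")
    return uses_spot and "spot_bid_max_price" not in attrs

# Example shape of a pool response (abbreviated, hypothetical):
pool = {"azure_attributes": {"availability": "SPOT_AZURE"}}
print(missing_spot_bid(pool))  # True: spot pool with no bid set
```

If missing_spot_bid returns True for your pool, that would line up with the validation error about the value needing to be present.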

4 REPLIES

Poorva21
New Contributor

1. Explicitly define spot_bid_max_price in the instance pool

If your pool uses Spot instances, check that the pool configuration includes a valid value.

How to check:

Go to Compute → Instance Pools

Edit the pool

Confirm that spot_bid_max_price is visible and set

If it’s not set, add:

spot_bid_max_price = 100

or whatever value your organization prefers.
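The edit above can also be scripted. Here is a sketch that builds an instance-pools/edit request body pinning spot_bid_max_price while preserving the pool's other azure_attributes (field names follow the Instance Pools API as I understand it; the edit endpoint expects the pool's required fields to be resent):

```python
def pool_edit_payload(pool: dict, max_price: float) -> dict:
    """Build an instance-pools/edit body that sets spot_bid_max_price,
    keeping the pool's existing azure_attributes intact."""
    return {
        "instance_pool_id": pool["instance_pool_id"],
        "instance_pool_name": pool["instance_pool_name"],
        "node_type_id": pool["node_type_id"],
        "azure_attributes": {
            **pool.get("azure_attributes", {}),
            "spot_bid_max_price": max_price,
        },
    }

existing = {
    "instance_pool_id": "pool-123",          # hypothetical id
    "instance_pool_name": "jobs-pool",       # hypothetical name
    "node_type_id": "Standard_DS3_v2",
    "azure_attributes": {"availability": "SPOT_AZURE"},
}
body = pool_edit_payload(existing, 100)
print(body["azure_attributes"])  # availability preserved, bid added
```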

2. Override spot_bid_max_price in the cluster policy

If the pool does not enforce the value, the policy must.

Example (fixed value):

"azure_attributes.spot_bid_max_price": {

"type": "fixed",

"value": 100

}

Example (allow a range):

"azure_attributes.spot_bid_max_price": {

"type": "range",

"minValue": 0,

"maxValue": 100

}

This ensures the cluster always satisfies the required attribute.
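To make the fixed/range semantics concrete, here is a toy validator (my own sketch, not Databricks' actual validation code) showing how these two rule types constrain an attribute value:

```python
def satisfies(policy: dict, attr: str, value) -> bool:
    """Check a single attribute value against a policy's fixed/range rule."""
    rule = policy.get(attr)
    if rule is None:
        return True  # unconstrained attribute
    if rule["type"] == "fixed":
        return value == rule["value"]
    if rule["type"] == "range":
        lo = rule.get("minValue", float("-inf"))
        hi = rule.get("maxValue", float("inf"))
        return lo <= value <= hi
    return True  # other rule types not modeled in this sketch

policy = {
    "azure_attributes.spot_bid_max_price":
        {"type": "range", "minValue": 0, "maxValue": 100}
}
print(satisfies(policy, "azure_attributes.spot_bid_max_price", 50))   # True
print(satisfies(policy, "azure_attributes.spot_bid_max_price", -1))   # False
```

Note that a range with minValue 0 would reject -1, which matters given the original poster's attempt to set spot_bid_max_price = -1.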

3. Remove spot configuration from the pool and let the policy control it

If the pool has conflicting spot settings, you can simplify:

Edit the instance pool

Remove any explicit spot_bid_max_price

Set the correct value only in the cluster policy

This avoids double-definition.

4. Check Databricks Runtime version and policy behavior

Older DBR versions sometimes behave differently with inherited policy fields.

If possible:

Use DBR 11.x or newer

Test with a simplified policy first (no hidden attributes)

Then re-apply constraints gradually
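A minimal starting policy for that incremental test might look like this (pool id copied from the thread; adjust to your workspace):

```python
import json

# Start with only the essentials pinned, then add constraints back one by one.
minimal_policy = {
    "cluster_type": {"type": "fixed", "value": "job"},
    "instance_pool_id": {
        "type": "fixed",
        "value": "0716-064827-deft298-pool-xzsz66cy",
    },
}
print(json.dumps(minimal_policy, indent=2))
```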

Thanks for commenting. I followed your suggestion and went to Compute → Instance Pools in my Databricks workspace, but I cannot locate a spot_bid_max_price configuration option there.

singhanuj2803
Contributor

Below is the cluster policy JSON with all the fixes I have applied; the issue still persists. The changes are highlighted in the comments.

{
  "cluster_type": {
    "type": "fixed",
    "value": "job"
  },

  "spark_conf.spark.databricks.cluster.profile": {
    "type": "forbidden",
    "hidden": true
  },

  "spark_version": {
    "type": "unlimited",
    "defaultValue": "auto:latest-lts"
  },

  // FIX #2 — node_type_id must be forbidden when using instance pools
  "node_type_id": {
    "type": "forbidden",  // <-- CHANGED: was "unlimited"
    "hidden": true        // <-- Added to avoid UI prompts
  },

  "num_workers": {
    "type": "unlimited",
    "defaultValue": 4,
    "isOptional": true
  },

  "azure_attributes.availability": {
    "type": "unlimited",
    "defaultValue": "ON_DEMAND_AZURE"
  },

  // FIX #1 — spot_bid_max_price must NOT be hidden AND must be explicitly set
  "azure_attributes.spot_bid_max_price": {
    "type": "fixed",
    "value": -1,      // <-- same value, but valid only if exposed
    "hidden": false   // <-- CHANGED: was hidden by default
  },

  // Using your existing pool
  "instance_pool_id": {
    "type": "fixed",
    "value": "0716-064827-deft298-pool-xzsz66cy",
    "hidden": true
  },

  // Driver also uses the pool
  "driver_instance_pool_id": {
    "type": "fixed",
    "value": "0716-064827-deft298-pool-xzsz66cy",
    "hidden": true
  },

  "autoscale.min_workers": {
    "type": "unlimited",
    "defaultValue": 1
  },

  "autoscale.max_workers": {
    "type": "unlimited",
    "defaultValue": 6
  }
}

Poorva21
New Contributor

Possible reasons:

1. Setting spot_bid_max_price = -1 may not be accepted by Azure pools

For azure_attributes.spot_bid_max_price, Azure Databricks accepts:

-1 → the default; the bid is capped at the on-demand price, and the instance is never evicted for price reasons

positive numbers → an explicit max spot price

(A value of 0 is not valid; whether you get spot or on-demand capacity is controlled by azure_attributes.availability, not by the bid price.)

In practice, -1 is accepted in cluster policies but apparently not inside pools, so validation never completes.

2. Pools always override cluster policy spot settings

Instance pools always dictate:

VM type (Spot / On-demand)

spot bid price

node type

Cluster policies cannot override these.
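If the pool really is authoritative, the decision reduces to configuring the pool itself, which also answers the original question 3. A sketch of the two pool-level azure_attributes shapes being discussed (field names per the Instance Pools API as I understand them; -1 is the documented default bid, capped at the on-demand price):

```python
def pool_azure_attrs(use_spot: bool, max_price: float = -1) -> dict:
    """Pool-level azure_attributes for spot vs. on-demand capacity."""
    if not use_spot:
        # On-demand capacity sidesteps spot-bid validation entirely.
        return {"availability": "ON_DEMAND_AZURE"}
    return {"availability": "SPOT_AZURE", "spot_bid_max_price": max_price}

print(pool_azure_attrs(False))
print(pool_azure_attrs(True, 100))
```

On-demand pools cost more per hour but avoid both eviction risk and this class of validation error, which may be the simpler trade-off for latency-sensitive job pools.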
