Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Job queue for pool limit

andyh
New Contributor

I have a cluster pool with a max capacity limit, to make sure we're not burning too much extra silicon. We use this pool for some of our less critical workflows/jobs. They still spend a lot of time idle, but they sometimes hit the max capacity limit. Is there a way to get a job to wait for an available pool instance, rather than automatically failing with instance_pool_error_code: INSTANCE_POOL_MAX_CAPACITY_FAILURE?
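For context, the cap being hit is the max_capacity field on the instance pool itself. Below is a minimal sketch (Python) of creating such a pool through the Instance Pools REST API; the workspace URL, token handling, pool name, node type, and numbers are placeholders, not values from this thread.

import os
import requests

HOST = "https://<your-workspace>.cloud.databricks.com"  # placeholder workspace URL
TOKEN = os.environ["DATABRICKS_TOKEN"]                   # personal access token

# Pool with a hard cap: once max_capacity instances are allocated, further
# cluster creation against the pool fails with
# INSTANCE_POOL_MAX_CAPACITY_FAILURE rather than waiting for a free instance.
payload = {
    "instance_pool_name": "non-critical-jobs-pool",   # example name
    "node_type_id": "i3.xlarge",                      # example node type
    "min_idle_instances": 0,
    "max_capacity": 10,                               # the limit being hit
    "idle_instance_autotermination_minutes": 15,
}

resp = requests.post(
    f"{HOST}/api/2.0/instance-pools/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # returns the new pool's instance_pool_id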

ACCEPTED SOLUTION

SSundaram
Contributor

Try increasing your max capacity limit, and you might also want to bring down the minimum number of nodes the job uses.

At the job level, try configuring retries and the time interval between retries.
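As a minimal sketch of that job-level retry configuration, here is what a task fragment for the Jobs 2.1 API (jobs/create or jobs/reset) might look like in Python. The notebook path, pool ID, and retry values are illustrative assumptions; whether a retried run succeeds still depends on pool capacity having freed up by the time it starts.

# Task fragment for the Jobs 2.1 API; values are illustrative, not from the thread.
task = {
    "task_key": "non_critical_task",
    "notebook_task": {"notebook_path": "/Workspace/Users/someone/example"},  # placeholder
    "new_cluster": {
        "spark_version": "14.3.x-scala2.12",      # example runtime
        "num_workers": 1,                         # keep the minimum node count low
        "instance_pool_id": "<pool-id>",          # the capped pool
    },
    "max_retries": 3,                             # re-attempt the run a few times
    "min_retry_interval_millis": 10 * 60 * 1000,  # wait ~10 minutes between attempts
}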


2 REPLIES

karthik_p
Esteemed Contributor

@andyh Did you get a chance to check the queue option in Jobs? That may help. I will update this thread if we find any other options.
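For reference, a minimal sketch (Python) of turning on the job-level queue setting through the Jobs 2.1 API; the workspace URL and job ID are placeholders, and it is worth checking the docs for whether queueing covers pool-capacity exhaustion as well as concurrency limits.

import os
import requests

HOST = "https://<your-workspace>.cloud.databricks.com"  # placeholder workspace URL
TOKEN = os.environ["DATABRICKS_TOKEN"]
JOB_ID = 123456                                          # placeholder job ID

# Partial update enabling the job-level queue, so new runs wait for
# resources instead of failing immediately.
resp = requests.post(
    f"{HOST}/api/2.1/jobs/update",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "job_id": JOB_ID,
        "new_settings": {"queue": {"enabled": True}},
    },
    timeout=30,
)
resp.raise_for_status()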

