Data Engineering

Error When Starting the Cluster

AmineHY
Contributor

I am getting this error when starting my cluster. Any idea why?

1 ACCEPTED SOLUTION


NandiniN
Databricks Employee

Hello @Amine HADJ-YOUCEF,

“SUBNET_EXHAUSTED_FAILURE” occurs when VMs on unhealthy partitions cannot be freed, so their resources and the associated quota are never released and you eventually run out of quota.

Your VNet/subnet has run out of free IP addresses; this can be fixed by allocating more IPs to your network address space.

Each cluster node requires its own IP address, so if none are available, the cluster simply cannot start.

  1. Increase the size of the subnet: You can expand the subnet's IP address range by shortening the subnet mask (lowering the prefix length). For example, if your subnet currently uses a /26 mask (64 IP addresses), you could move to a /25 mask (128 IP addresses), giving you more addresses to work with (see the quick sketch after this list).
  2. Create a new subnet: If expanding the current subnet is not feasible, you can create a new subnet with a larger IP address range and assign it to the NIC.
  3. Delete unused resources: Check whether any unused resources, such as unassigned IP addresses or idle VMs, are taking up space in the subnet; removing them frees up IP addresses.
  4. Use a different IP address range: If none of the above options work, you can switch to a different IP address range altogether. This requires changing the IP address ranges of all resources that use the current subnet.
  5. Contact Cloud Provider Support.
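
As a rough illustration of option 1, here is a minimal sketch using only Python's standard library. The "5 reserved IPs per subnet" figure and the assumption that each node consumes one IP per Databricks subnet reflect typical Azure/Databricks VNet-injection behaviour; please verify both against the docs for your deployment.

```python
# Minimal subnet-sizing sketch (standard library only).
# Assumptions: Azure reserves 5 IP addresses per subnet, and each
# cluster node needs one IP in each Databricks subnet (host/container).
import ipaddress

AZURE_RESERVED_IPS = 5  # network + broadcast + 3 Azure-reserved addresses

def usable_ips(cidr: str) -> int:
    """Number of addresses a subnet can actually hand out to NICs."""
    return ipaddress.ip_network(cidr).num_addresses - AZURE_RESERVED_IPS

for cidr in ("10.0.1.0/26", "10.0.1.0/25"):
    print(f"{cidr}: {usable_ips(cidr)} usable IPs, roughly that many cluster nodes")
```

For a /26 this prints 59 usable IPs versus 123 for a /25, which is why moving to the shorter mask roughly doubles the number of nodes the subnet can hold.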

Hope this helps.

Thanks & Regards,

Nandini


4 REPLIES

yogu
Honored Contributor III

I guess if your subnet is too small to accommodate all the required network interfaces, you can try increasing its size by adding more IP addresses to it.


-werners-
Esteemed Contributor III

Are you sure you can change the subnet size on an existing Databricks environment? I was always told that this is not possible.

NandiniN
Databricks Employee

@Werner Stinckens,

I checked again; you cannot change the subnet ranges after the workspace is deployed. The only option right now is to recreate the workspace and migrate, since the CIDR range cannot be updated without a migration.
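
If you do go down the recreate-and-migrate route, a quick overlap check like the sketch below (placeholder CIDRs, standard library only) can confirm that the range you plan for the new workspace does not collide with the subnets you already have:

```python
# Sanity-check a candidate CIDR for a replacement workspace.
# The ranges below are placeholders; substitute your own VNet/subnet CIDRs.
import ipaddress

existing_ranges = [
    ipaddress.ip_network("10.10.0.0/26"),   # current host subnet (example)
    ipaddress.ip_network("10.10.0.64/26"),  # current container subnet (example)
]
proposed = ipaddress.ip_network("10.10.4.0/23")  # candidate range for the new workspace

conflicts = [str(r) for r in existing_ranges if proposed.overlaps(r)]
print("overlaps with:", conflicts or "nothing - safe to plan around")
```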
