
ENDPOINT_NOT_FOUND error for the /2.0/clusters/list-zones API on Databricks running on GCP

hokam
New Contributor II

Hi,

I am trying to build ETL data pipelines on a Databricks workspace running on GCP.

For automated cluster creation, when I try to call the list availability zones REST API, it fails with an endpoint-not-found error response.

Below are the details of the call that I am making.

URL: https://xxxxxx.gcp.databricks.com/api/2.0/clusters/list-zones

Method Type: GET

Response:
{
  "error_code": "ENDPOINT_NOT_FOUND",
  "message": "Could not handle RPC class com.databricks.api.proto.cluster.ListAvailableZones."
}
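
For reference, here is a minimal sketch of the call in Python with the requests library (the token handling is an assumption; the actual request is authenticated with a valid personal access token):

import os

import requests

BASE_URL = "https://xxxxxx.gcp.databricks.com"  # per-workspace URL
TOKEN = os.environ["DATABRICKS_TOKEN"]  # assumption: PAT exported in the environment

# GET the list of availability zones for cluster creation.
resp = requests.get(
    f"{BASE_URL}/api/2.0/clusters/list-zones",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
print(resp.status_code)
print(resp.json())  # -> {"error_code": "ENDPOINT_NOT_FOUND", ...}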

Any help, on how to fix it?

1 ACCEPTED SOLUTION

Kaniz_Fatma
Community Manager

Hi @hokam, make sure you are using the correct endpoint for the REST API. The endpoint for creating automated clusters and listing availability zones on a Databricks workspace running on GCP differs from the other cloud providers; the correct endpoint for GCP is https://<region>.gcp.databricks.com. Also, ensure you have the necessary permissions to access the REST API. You can refer to the Databricks REST API documentation for more information.


To fix the "ENDPOINT_NOT_FOUND" error with the error code and message "Could not handle RPC class com.databricks.api.proto.cluster.ListAvailableZones" when trying to access the list availability zones REST API on a Databricks workspace running on GCP, the following steps can be taken:-

  • Check that the GCP project hosting the Databricks workspace grants the required permissions to the Databricks service account (SA) associated with the workspace. The SA should have the Compute Storage Admin, Databricks Service IAM Role for Workspace, and Kubernetes Engine Admin roles; one way to verify is sketched after this list.
  • Check whether the SA has been removed or its permissions have been changed. If so, re-grant the required permissions.
  • Check that the network configuration is correct and that the Databricks workspace VPC has connectivity to the GCP project hosting the GKE cluster.
  • Check that the firewall rules and VPC routes are configured to allow traffic between the Databricks workspace VPC and the GKE cluster VPC.
  • Check that DNS resolution is working correctly and that custom DNS, if used, is properly configured.
  • Check that the GKE cluster is healthy and that all nodes have registered. If not, delete the GKE cluster and retry the Databricks cluster launch.

If the above steps do not resolve the issue, contact Databricks support for further assistance.
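
As one way to run the SA role check from the first bullet, here is a rough sketch using the google-cloud-resource-manager Python client (the project ID and SA email below are placeholders, and inspecting the IAM page in the Cloud Console works just as well):

from google.cloud import resourcemanager_v3

PROJECT_ID = "my-gcp-project"  # placeholder: project hosting the workspace
SA_EMAIL = "databricks-sa@my-gcp-project.iam.gserviceaccount.com"  # placeholder

# Fetch the project-level IAM policy and list the roles bound to the SA.
client = resourcemanager_v3.ProjectsClient()
policy = client.get_iam_policy(request={"resource": f"projects/{PROJECT_ID}"})

member = f"serviceAccount:{SA_EMAIL}"
roles = [binding.role for binding in policy.bindings if member in binding.members]
print(roles)  # the three roles listed above should appear here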

2 REPLIES

hokam
New Contributor II

Hi @Kaniz_Fatma, I tried the region-based GCP Databricks URL as well, and it is also failing, with an invalid-URL error.

URL: https://us-east4.gcp.databricks.com/api/2.0/clusters/list-zones

Response:

{
  "error_code": "400",
  "message": "Invalid URL. Please use the per-workspace URL and try again."
}

Also, I have checked all the necessary permissions for the SA, and they are already in place.

One more thing: against the GCP per-workspace URL, the list-node-types REST call works fine and returns the expected result, but list-zones fails. A minimal sketch of the two calls side by side is below.
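
Here is a rough sketch of that comparison, again using requests with a personal access token from the environment (both assumptions):

import os

import requests

BASE_URL = "https://xxxxxx.gcp.databricks.com"  # per-workspace URL
HEADERS = {"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"}

# Works: returns the node types available to the workspace.
ok = requests.get(f"{BASE_URL}/api/2.0/clusters/list-node-types", headers=HEADERS)
print(ok.status_code)  # 200

# Fails on GCP with ENDPOINT_NOT_FOUND.
bad = requests.get(f"{BASE_URL}/api/2.0/clusters/list-zones", headers=HEADERS)
print(bad.status_code, bad.json())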

Thanks
