Internal error message: Failed to launch cluster in Kubernetes (GCP Databricks)
12-15-2022 09:00 AM
We are using the Databricks-provided default workspace configuration on GCP. We have created three GCS buckets and want to access them through a Databricks all-purpose cluster using a service principal name. When we add the service principal name to the cluster, it times out with the following error:
Internal error message: Failed to launch cluster in Kubernetes in 1800 seconds. driver readiness: false, last executors readiness ratio: 1/1, expected executors readiness ratio: 0.5
Spark Startup Failure: Spark was not able to start in time. This issue can be caused by a malfunctioning Hive metastore, invalid Spark configurations, or malfunctioning init scripts. Please refer to the Spark driver logs to troubleshoot this issue, and contact Databricks if the problem persists.
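For context, a minimal sketch of how GCS access via a service account is typically wired up on a Databricks-on-GCP cluster, assuming the "service principal" here is a Google service account and that its key is stored in a hypothetical Databricks secret scope named `gcs` (the bucket name and path below are placeholders, not from this thread):

```python
# The GCS connector settings normally go in the cluster's Spark config
# (cluster > Advanced Options > Spark), not in notebook code. A typical set,
# per the Databricks GCS connector docs, looks like:
#
#   spark.hadoop.google.cloud.auth.service.account.enable true
#   spark.hadoop.fs.gs.auth.service.account.email <sa-name>@<project-id>.iam.gserviceaccount.com
#   spark.hadoop.fs.gs.project.id <project-id>
#   spark.hadoop.fs.gs.auth.service.account.private.key {{secrets/gcs/gsa_private_key}}
#   spark.hadoop.fs.gs.auth.service.account.private.key.id {{secrets/gcs/gsa_private_key_id}}
#
# A misconfigured value here (e.g. a malformed private key) is one of the
# "invalid Spark configurations" that can keep the driver from starting.

# Once the cluster is up, reading from one of the buckets is a plain gs:// path
# ("my-bucket" and the CSV layout are illustrative assumptions):
df = spark.read.option("header", "true").csv("gs://my-bucket/raw/")
display(df)
```

If the cluster never reaches a running state, the sketch above never executes, so the Spark driver logs mentioned in the error are the first place to look for which config value it choked on.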
12-19-2022 07:18 AM
@Kaniz Fatma I am trying to raise a support ticket, but it shows an admin permission issue. I am the account owner and admin, yet I am still unable to raise a support ticket. When I raise one by email instead, it goes in at normal priority and I receive no response. Can you please help with that?
03-06-2023 08:54 AM
Hello @karthik p,
Sorry to send a message in this thread after 3 months, but has your issue been resolved since then?
We are facing the exact same one and have been trying to get hold of support for 10 days.
Regards,
Antoine

