We are using the default Databricks-provided workspace configuration on GCP. We have created three GCS buckets and want to access them from a Databricks all-purpose cluster using a service principal name. When we add the service principal name to the cluster, cluster launch times out with the following errors:
Internal error message: Failed to launch cluster in Kubernetes in 1800 seconds. driver readiness: false, last executors readiness ratio: 1/1, expected executors readiness ratio: 0.5
Spark Startup Failure: Spark was not able to start in time. This issue can be caused by a malfunctioning Hive metastore, invalid Spark configurations, or malfunctioning init scripts. Please refer to the Spark driver logs to troubleshoot this issue, and contact Databricks if the problem persists.
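For context, this is the access pattern we expect to work once the cluster is up. A minimal sketch, assuming the service account attached to the cluster has read access (e.g. roles/storage.objectViewer) on the bucket; the bucket and path names below are placeholders:

```python
# Runs in a Databricks notebook, where `spark` (a SparkSession) is predefined.
# Assumption: the Google service account attached to the cluster can read the
# bucket, so the GCS connector authenticates without extra Spark configs.
df = (
    spark.read.format("csv")
    .option("header", "true")
    .load("gs://example-bucket-1/path/to/data/")  # placeholder bucket/path
)
df.show(5)
```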