
Databricks OAuth errors with Terraform deployment in GCP

snazkx
New Contributor

I am trying to deploy a Databricks workspace in GCP using Terraform with a customer-managed VPC. The only difference from the standard Terraform provider configuration is that I have a pre-created shared VPC in a host project, and a dedicated workspace project with the subnet shared from the shared VPC project.

Based on the docs, I created two custom roles, one for the network project and one for the workspace project: https://docs.databricks.com/gcp/en/admin/cloud-configurations/gcp/permissions
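
For context, the roles and bindings were created along these lines (role IDs and permission lists below are abbreviated placeholders; the full permission sets are in the linked docs):

# Sketch only: custom role in the shared VPC host project
# (permissions abbreviated; see the Databricks permissions docs for the full list)
resource "google_project_iam_custom_role" "databricks_network" {
  project = var.shared_vpc_project
  role_id = "databricksNetworkRole" # placeholder ID
  title   = "Databricks Network Role"
  permissions = [
    "compute.networks.get",
    "compute.subnetworks.get",
    "compute.subnetworks.use",
  ]
}

# Bind the deployment service account to the network role on the host project
resource "google_project_iam_member" "network_role_binding" {
  project = var.shared_vpc_project
  role    = google_project_iam_custom_role.databricks_network.id
  member  = "serviceAccount:${var.databricks_google_service_account}"
}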

I am running with Google service account impersonation. The service account was created in the workspace project and assigned the workspace-related custom role from the link above; I also granted it the network-project role on the shared VPC host project.
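
The impersonation itself is wired up roughly like this; as I understand it, the caller needs roles/iam.serviceAccountTokenCreator on the service account so the Databricks provider can mint tokens for it (the caller identity below is a placeholder):

# Sketch only: let my user identity mint tokens for the deployment SA,
# which the Databricks provider needs for impersonation
resource "google_service_account_iam_member" "impersonation" {
  # fully-qualified SA name: projects/<project>/serviceAccounts/<email>
  service_account_id = "projects/${var.workspace_project}/serviceAccounts/${var.databricks_google_service_account}"
  role               = "roles/iam.serviceAccountTokenCreator"
  member             = "user:me@example.com" # placeholder caller identity
}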


provider "databricks" {
  alias                  = "accounts"
  host                   = "https://accounts.gcp.databricks.com"
  google_service_account = var.databricks_google_service_account # Added manually as Account admin from the console
  account_id             = var.databricks_account_id
}


# Provision databricks network configuration
resource "databricks_mws_networks" "databricks_network" {
  provider     = databricks.accounts
  account_id   = var.databricks_account_id
  network_name = var.vpc_name
  gcp_network_info {
    network_project_id = var.shared_vpc_project
    vpc_id             = data.google_compute_network.vpc.name
    subnet_id          = data.google_compute_subnetwork.subnet.name
    subnet_region      = data.google_compute_subnetwork.subnet.region
  }
}
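
The data sources referenced above are just lookups against the shared VPC host project (the subnet variable names here are placeholders):

# Lookups for the pre-created shared VPC and subnet in the host project
data "google_compute_network" "vpc" {
  name    = var.vpc_name
  project = var.shared_vpc_project
}

data "google_compute_subnetwork" "subnet" {
  name    = var.subnet_name   # placeholder variable name
  region  = var.subnet_region # placeholder variable name
  project = var.shared_vpc_project
}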

# Create the workspace
resource "databricks_mws_workspaces" "workspace" {
  provider                = databricks.accounts
  account_id              = var.databricks_account_id
  workspace_name          = var.workspace_name
  location                = data.google_compute_subnetwork.subnet.region
  is_no_public_ip_enabled = false
  cloud_resource_container {
    gcp {
      project_id = var.workspace_project
    }
  }

  token {
    comment = "Terraform Test "
  }

  network_id = databricks_mws_networks.databricks_network.network_id

}

The error below appears after the workspace gets stuck in the provisioning state for hours on end:

Workspace status: Failed

Workspace status message:

Workspace failed to launch.
Error: [BAD_REQUEST] UNAUTHENTICATED: GCP request for 'getIamRole' rejected with exception: java.lang.IllegalStateException: OAuth2Credentials instance does not support refreshing the access token. An instance with a new access token should be used, or a derived type that supports refreshing.
Please ensure that you are using a valid Auth token.


I have double-checked the roles. I was also able to deploy the workspace manually in a matter of minutes from my own user account, just by assigning the same custom roles across the two projects, so the roles and permissions seem fine. The provider is set to impersonate the Google service account, but I cannot figure out the problem. Can anyone help me find it?
