Hi there,
I'm new to Databricks on GCP. I provisioned my Databricks workspace with Terraform and everything worked well. Now that I want to do a targeted destroy of my workspace, I'm running into issues:
When I run terraform destroy -target module.workspace, the workspace and everything related to it in Databricks (e.g. the metastore assignment and VPC network assignment) are successfully pruned, except for the underlying GKE cluster and the GCS buckets that Databricks provisioned when creating the workspace.
So when I then try to destroy the VPC networking resources with another targeted destroy, I get this error:
Error when reading or editing Subnetwork: googleapi: Error 400: The subnetwork resource 'projects/data-platform-437607/regions/europe-west1/subnetworks/production-subnet' is already being used by 'projects/data-platform-437607/zones/europe-west1-b/instances/gke-db-4319326960483-system-pool-2024-f207051a-671q', resourceInUseByAnotherResource
The error is expected because the GCP resources are still there, but nowhere in my Terraform config did I create these resources separately (they are automatically provisioned by Databricks during workspace creation).
My question is: is it possible to destroy these remaining Databricks-provisioned GCP resources with Terraform, or is manual deletion the only way to go? Thank you very much!
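If manual deletion does turn out to be the only option, I assume the cleanup would look roughly like this. The cluster name db-4319326960483 is inferred from the node instance name in the error above (GKE nodes are named gke-<cluster>-<pool>-…), and the bucket name is a placeholder since I don't know the exact names Databricks generates:

```shell
# Delete the Databricks-managed GKE cluster left behind after the workspace destroy
# (cluster name inferred from the node instance in the subnetwork error)
gcloud container clusters delete db-4319326960483 \
  --region europe-west1 \
  --project data-platform-437607 \
  --quiet

# Delete a Databricks-provisioned GCS bucket and all of its contents
# (bucket name is a placeholder; list actual buckets with `gcloud storage ls`)
gcloud storage rm --recursive gs://databricks-4319326960483
```

Deleting the GKE cluster first should release the subnetwork, after which the targeted destroy of the VPC networking module would presumably succeed.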
Attached is my Terraform config for workspace:
resource "databricks_mws_workspaces" "this" {
  provider       = databricks.acc
  account_id     = var.databricks_account_id
  workspace_name = var.databricks_workspace_name
  location       = var.google_compute_subnet_region

  cloud_resource_container {
    gcp {
      project_id = var.google_project
    }
  }

  network_id = var.databricks_mws_network_id

  gke_config {
    connectivity_type = "PRIVATE_NODE_PUBLIC_MASTER"
    master_ip_range   = var.gke_master_ip_range
  }

  token {
    comment = "Terraform provisioned workspace ${var.dbx_env}"
  }

  # Make sure the NAT for outbound traffic is created before the workspace
  depends_on = [var.google_compute_router_nat]
}
#GCP #Terraform #Databricks