
Failing Cluster Creation

jv_v
New Contributor III

I'm encountering an issue with my Terraform code for creating a cluster. The terraform plan command runs successfully and shows the expected changes, but the terraform apply after that fails with errors. Here are the details:

[Error screenshots attached: jv_v_0-1719500677324.png, jv_v_1-1719500696573.png]

Terraform Code:

terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
    }
    databricks = {
      source  = "databricks/databricks"
      version = "1.46.0"
    }
  }
}
provider "azurerm" {
skip_provider_registration="true"
features {}
subscription_id = var.subscription_id
client_id = var.client_id
client_secret = var.client_secret
tenant_id = var.tenant_id
}
locals {
databricks_workspace_host = module.metastore_and_users.databricks_workspace_host
workspace_id = module.metastore_and_users.databricks_workspace_id
}

// Provider for the Databricks account
provider "databricks" {
  alias         = "azure_account"
  host          = "https://accounts.azuredatabricks.net"
  account_id    = var.account_id
  client_id     = var.client_id
  client_secret = var.db_client_secret
}

// Provider for the Databricks workspace
provider "databricks" {
  alias         = "Workspace"
  host          = local.databricks_workspace_host
  client_id     = var.client_id
  client_secret = var.db_client_secret
}

// Task014: Creating the cluster with the "smallest" node type

data "databricks_node_type" "smallest" {
  local_disk = true
}

# Defined policy
data "databricks_cluster_policy" "personal" {
  name = "Personal Compute"
}

# Long Term Support (LTS) version
data "databricks_spark_version" "latest_lts" {
  long_term_support = true
}

resource "databricks_cluster" "mycluster" {
  provider                = databricks.Workspace
  cluster_name            = var.cluster_name
  policy_id               = data.databricks_cluster_policy.personal.id
  node_type_id            = var.node_type_id # Set the appropriate node type ID here
  spark_version           = data.databricks_spark_version.latest_lts.id
  autotermination_minutes = var.cluster_autotermination_minutes
  num_workers             = var.cluster_num_workers
  data_security_mode      = var.data_security_mode

  autoscale {
    min_workers = var.min_workers
    max_workers = var.max_workers
  }

  spark_conf = {
    "spark.databricks.catalog.enabled" = "true"
  }
}

 

Could someone help me understand why terraform apply is failing after a successful plan? Any suggestions on how to debug or fix this issue would be greatly appreciated.

1 REPLY

jacovangelder
Contributor III

Are you getting two different errors?
The default auth error usually means you need to explicitly set the provider in the data or resource blocks as well, or that you're missing a depends_on attribute. I think in both cases it's the latter.

i.e.

data "databricks_cluster_policy" "personal" {
depends_on = azurerm_databricks_workspace.example
name = "Personal Compute"
}

The same goes for your other data sources, databricks_node_type and databricks_spark_version.
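
For reference, here's a rough sketch of both fixes applied to all three data sources. I'm assuming the workspace is created inside module.metastore_and_users (as your locals suggest), so depends_on can reference the module itself:

data "databricks_node_type" "smallest" {
  provider   = databricks.Workspace
  local_disk = true
  depends_on = [module.metastore_and_users]
}

data "databricks_cluster_policy" "personal" {
  provider   = databricks.Workspace
  name       = "Personal Compute"
  depends_on = [module.metastore_and_users]
}

data "databricks_spark_version" "latest_lts" {
  provider          = databricks.Workspace
  long_term_support = true
  depends_on        = [module.metastore_and_users]
}

That way Terraform knows to authenticate these lookups against the workspace provider and to defer reading them until the workspace actually exists.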

Can you give this a try? 
