Administration & Architecture
Explore discussions on Databricks administration, deployment strategies, and architectural best practices. Connect with administrators and architects to optimize your Databricks environment for performance, scalability, and security.

Deploy Databricks workspace on Azure with Terraform - failed state: legacy access

Hil
New Contributor

I'm trying to deploy a workspace on Azure via Terraform and I'm getting the following error:

"INVALID_PARAMETER_VALUE: Given value cannot be set for workspace~<id>~default_namespace_ws~ because: cannot set default namespace to hive_metastore since legacy access is disabled"

My thought was to set the default namespace to that of the UC catalog. However, this is only possible within the workspace itself, which in my case cannot be provisioned. Has anyone dealt with this issue?

To give a bit of context:

- I've set up a UC metastore for the region that is set to be automatically assigned to new workspaces
- When manually provisioning a workspace, everything works as expected

Any insight will be helpful, thanks!
1 ACCEPTED SOLUTION


Hil
New Contributor

I found the issue: the "automatically assign workspaces to this metastore" setting was checked. Unchecking it and manually assigning the metastore worked.
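For reference, the manual assignment described above can also be expressed in Terraform at the account level. A minimal sketch, assuming an account-level provider alias and resource names (`databricks.account`, `databricks_metastore.this`) that are placeholders, not part of the original config:

```hcl
# Sketch only: assumes an account-level databricks provider aliased as
# "account" and a UC metastore managed elsewhere in the config -- both
# are placeholder names to adapt to your environment.
resource "databricks_metastore_assignment" "this" {
  provider     = databricks.account
  metastore_id = databricks_metastore.this.id
  workspace_id = azurerm_databricks_workspace.dbw.workspace_id
}
```

With auto-assignment disabled in the account console, this makes the metastore-to-workspace binding explicit in Terraform instead of relying on the account-level default.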


4 REPLIES

belforte
New Contributor

This error happens because the legacy hive_metastore is disabled, and Terraform is trying to use it as the default namespace. The fix is to set the default namespace to a Unity Catalog (UC) catalog in your Terraform config, or leave it unset so UC is applied automatically.
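Setting the default namespace to a UC catalog could look like the following. A minimal sketch, assuming a workspace-level Databricks provider and a catalog named "main" (both assumptions, not from the original post):

```hcl
# Sketch only: assumes the databricks provider is already configured
# against the workspace, and that a UC catalog named "main" exists --
# replace with your own catalog name.
resource "databricks_default_namespace_setting" "this" {
  namespace {
    value = "main"
  }
}
```

This workspace-level setting replaces the disabled hive_metastore as the default namespace, so unqualified table names resolve into UC instead.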


nayan_wylde
Honored Contributor III

@Hil 

You need to tell Terraform not to set the default namespace to hive_metastore, but instead to your UC catalog (or just leave it unset so Databricks auto-assigns UC).

In Terraform (Databricks provider), the relevant field is workspace.default_namespace:

resource "databricks_mws_workspaces" "this" {
  # … your existing config …
  workspace_name = "my-ws"

  # Force default namespace to your UC catalog
  workspace {
    default_namespace = "my_catalog"   # <-- replace with your actual UC catalog
  }
}

@nayan_wylde, @belforte thanks for the input. Would you know how this applies with the Azure provider? I tried using databricks_default_namespace_setting, but it doesn't seem to work:

resource "azurerm_databricks_workspace" "dbw" {
  name                = "${local.org}-dbw-${local.env}"
  resource_group_name = azurerm_resource_group.dbx.name
  location            = azurerm_resource_group.dbx.location
  sku                 = "premium"
}

resource "databricks_default_namespace_setting" "ns" {
  provider = databricks.workspace
  namespace {
    value = "main.default"
  }

  depends_on = [azurerm_databricks_workspace.dbw]
}

To me it seems like the provisioning fails before Terraform reaches the default namespace setting. The account console shows that the UC metastore has been properly assigned to the failed workspace.
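One thing worth checking in this situation: workspace-level resources such as databricks_default_namespace_setting need a Databricks provider that points at the workspace, and that provider only becomes usable after azurerm_databricks_workspace has been created successfully. A minimal sketch of that wiring, reusing the resource names from the snippet above:

```hcl
# Sketch only: the workspace-scoped provider is configured from the
# azurerm workspace outputs, so any resource using it implicitly
# depends on the workspace having been created successfully.
provider "databricks" {
  alias                       = "workspace"
  host                        = "https://${azurerm_databricks_workspace.dbw.workspace_url}"
  azure_workspace_resource_id = azurerm_databricks_workspace.dbw.id
}
```

If the workspace deployment itself ends in a failed state, Terraform never gets far enough to apply the namespace setting, which matches the behavior described here.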
