Hi @vjussiiih,
Let me walk you through this. You are correct that DABs treat schemas and catalogs defined under "resources" as fully managed resources, which means they attempt to create them on first deploy and manage their full lifecycle. This is what causes both the "Schema already exists" error and the destructive delete/recreate warnings you are seeing.
The good news is there is a supported way to handle this. Here are the approaches depending on your situation:
APPROACH 1: USE "bundle deployment bind" FOR EXISTING SCHEMAS (RECOMMENDED)
The Databricks CLI supports a "bind" command that links a bundle-defined resource to an existing resource in your workspace. This tells DAB "this resource already exists, manage it going forward instead of trying to create a new one." Critically, bind does not recreate data or the resource itself.
Step 1 - Define the schema in your bundle YAML with the grants you want:
resources:
  schemas:
    schema_name:
      name: schema_name
      catalog_name: catalog_name
      grants:
        - principal: some_principal_name
          privileges:
            - USE_SCHEMA
            - SELECT
Step 2 - Bind the bundle resource to the existing schema:
databricks bundle deployment bind schema_name catalog_name.schema_name -t your_target
The first argument ("schema_name") is the resource key you used in your YAML. The second argument is the full name of the existing schema in your workspace.
Step 3 - Deploy:
databricks bundle deploy -t your_target
After binding, the deploy will update the existing schema with your grant definitions instead of trying to create a new one or destroying the existing one.
The bind command supports the following resource types: app, cluster, dashboard, job, model_serving_endpoint, pipeline, quality_monitor, registered_model, schema, and volume.
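If you have several schemas to bind, the per-schema commands can be generated from the bundle config itself. The sketch below is a hypothetical helper, not part of the Databricks CLI: it assumes your bundle YAML has already been parsed into a dict (for example with PyYAML) and simply builds the argument lists for the bind invocations shown above.

```python
# Hypothetical helper (not part of the Databricks CLI): given a parsed
# bundle config, build one "bundle deployment bind" invocation per
# schema resource, so binding many schemas can be scripted.

def bind_commands(bundle_config: dict, target: str) -> list[list[str]]:
    """Return one CLI argument list per schema resource in the bundle."""
    commands = []
    schemas = bundle_config.get("resources", {}).get("schemas", {})
    for resource_key, schema in schemas.items():
        # Full name of the existing schema: <catalog>.<schema>
        full_name = f"{schema['catalog_name']}.{schema['name']}"
        commands.append([
            "databricks", "bundle", "deployment", "bind",
            resource_key, full_name,
            "-t", target,
        ])
    return commands

if __name__ == "__main__":
    config = {
        "resources": {
            "schemas": {
                "schema_name": {
                    "name": "schema_name",
                    "catalog_name": "catalog_name",
                }
            }
        }
    }
    for cmd in bind_commands(config, "your_target"):
        print(" ".join(cmd))
```

Each returned list can be passed straight to subprocess.run, which avoids shell-quoting issues with principal or schema names.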
APPROACH 2: SQL TASK IN A DAB JOB (FOR CATALOGS OR GRANT-ONLY MANAGEMENT)
Since the bind command does not currently support catalogs, and since you mentioned you do not have CREATE CATALOG privileges on the metastore, a SQL task within a DAB-managed job is a practical alternative. This approach works for both catalogs and schemas, and it keeps your grants versioned in source control without DAB managing the lifecycle of the UC objects.
resources:
  jobs:
    apply_uc_grants:
      name: "apply-uc-grants"
      tasks:
        - task_key: "grant_permissions"
          sql_task:
            warehouse_id: ${var.warehouse_id}
            file:
              path: ./sql/apply_grants.sql

variables:
  warehouse_id:
    description: "SQL warehouse ID"
    lookup:
      warehouse: "your-warehouse-name"
Then create a file at sql/apply_grants.sql in your bundle:
-- Catalog-level grants
GRANT USE CATALOG ON CATALOG catalog_name TO `some_principal_name`;
GRANT CREATE SCHEMA ON CATALOG catalog_name TO `some_principal_name`;

-- Schema-level grants
GRANT USE SCHEMA ON SCHEMA catalog_name.schema_name TO `some_principal_name`;
GRANT SELECT ON SCHEMA catalog_name.schema_name TO `some_principal_name`;
You can then run this job after deployment with "databricks bundle run apply_uc_grants" or schedule it to run periodically to enforce your grants. The identity running the SQL must have MANAGE or ownership on the target objects.
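If the list of principals and privileges grows, you can keep it in one declarative structure and render apply_grants.sql from it. This is a minimal sketch with illustrative names (render_grants is a hypothetical helper, not a Databricks API):

```python
# Hypothetical generator: render GRANT statements for apply_grants.sql
# from a declarative mapping of principal -> privileges, so the grants
# live in one reviewable data structure under source control.

def render_grants(securable: str, full_name: str,
                  grants: dict[str, list[str]]) -> str:
    """Emit one GRANT statement per (principal, privilege) pair.

    securable: "CATALOG" or "SCHEMA"; full_name: the UC object name.
    """
    lines = []
    for principal, privileges in grants.items():
        for privilege in privileges:
            lines.append(
                f"GRANT {privilege} ON {securable} {full_name} "
                f"TO `{principal}`;"
            )
    return "\n".join(lines)

if __name__ == "__main__":
    print(render_grants("SCHEMA", "catalog_name.schema_name", {
        "some_principal_name": ["USE SCHEMA", "SELECT"],
    }))
```

Running the script and redirecting its output to sql/apply_grants.sql keeps the generated file in sync with the mapping.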
APPROACH 3: USE THE DATABRICKS TERRAFORM PROVIDER (IF YOU PREFER TERRAFORM)
Since your title mentions Terraform, it is worth noting that the Databricks Terraform provider has a "databricks_grants" resource that is designed specifically for this use case. It manages only the grants on an existing object without managing the object lifecycle:
resource "databricks_grants" "schema_grants" {
  schema = "catalog_name.schema_name"

  grant {
    principal  = "some_principal_name"
    privileges = ["USE_SCHEMA", "SELECT"]
  }

  grant {
    principal  = "another_principal"
    privileges = ["USE_SCHEMA", "SELECT", "MODIFY"]
  }
}

resource "databricks_grants" "catalog_grants" {
  catalog = "catalog_name"

  grant {
    principal  = "some_principal_name"
    privileges = ["USE_CATALOG"]
  }
}
The Terraform "databricks_grants" resource does not attempt to create or destroy the catalog or schema; it manages only the permissions. One caveat: it is authoritative per object, meaning the grants you declare replace any grants applied outside Terraform (ownership is unaffected), so list every principal that should keep access. If you want to manage a single principal's grants non-authoritatively, the provider also offers the singular "databricks_grant" resource. This is documented here:
https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/grants
KEY THINGS TO NOTE
1. CLI version: Make sure you are using a recent version of the Databricks CLI. Schema binding has been available since v0.243.0. Run "databricks --version" to check.
2. Grants are declarative: When DAB applies grants, it sets them to exactly what you specify. Test in a non-production environment first to understand how this interacts with existing grants on the object.
3. Permissions required: The identity running the deploy or SQL must have sufficient privileges (typically ownership or MANAGE) on the target catalog/schema to grant permissions.
4. Feature request for "data sources": There is an open feature request on GitHub (https://github.com/databricks/cli/issues/3460) for a "sources" or "bind: false" concept in DABs that would let you reference existing resources without DAB owning them. This would make your exact use case even simpler in the future.
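On point 1, a version check is easy to script into CI before running the bind workflow. This is a minimal sketch: it assumes the CLI prints a banner like "Databricks CLI v0.243.0" for "databricks --version", which is the format recent releases use.

```python
# Minimal sketch: verify the locally installed Databricks CLI is new
# enough for schema binding (v0.243.0 per the note above). Assumes the
# version banner looks like "Databricks CLI v0.243.0".
import re
import subprocess

MIN_VERSION = (0, 243, 0)

def parse_version(output: str) -> tuple[int, ...]:
    """Extract (major, minor, patch) from the CLI's version banner."""
    match = re.search(r"v(\d+)\.(\d+)\.(\d+)", output)
    if not match:
        raise ValueError(f"unrecognized version output: {output!r}")
    return tuple(int(part) for part in match.groups())

def cli_supports_schema_bind() -> bool:
    """Run 'databricks --version' and compare against MIN_VERSION."""
    out = subprocess.run(
        ["databricks", "--version"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_version(out) >= MIN_VERSION
```

Comparing tuples rather than raw strings avoids the classic pitfall where "0.243.0" sorts before "0.99.0" lexicographically.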
DOCUMENTATION REFERENCES
- Bundle deployment bind command:
https://docs.databricks.com/en/dev-tools/cli/bundle-commands.html
- Databricks Asset Bundles resources:
https://docs.databricks.com/en/dev-tools/bundles/resources.html
- Unity Catalog privileges reference:
https://docs.databricks.com/en/data-governance/unity-catalog/manage-privileges/privileges.html
- Terraform databricks_grants resource:
https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/grants
- GitHub feature request for data sources in DABs:
https://github.com/databricks/cli/issues/3460
Hope this helps! Let me know if you have any questions about the bind workflow or the SQL task approach.
* This reply was drafted with an agent system I built, which researches and drafts responses from the wide set of documentation I have available and from previous memory. I personally review each draft for obvious issues and to monitor system reliability, and I update it when I detect drift, but there is still a small chance something is inaccurate, especially if you are experimenting with brand-new features.