Best practices for 3-layer access control in Databricks
01-10-2026 12:15 PM
I'm designing an identity and access management model for Databricks and want to implement a clear 3-layer authorization approach:
Account level: Account RBAC roles (account admin, metastore admin, etc.)
Workspace level: Workspace roles/entitlements + workspace ACLs (clusters, jobs, repos, notebooks, SQL, etc.)
Data level: Unity Catalog privileges (catalog/schema/table, external locations, storage credentials, row/column security)
I'm also trying to enforce separation of duties (human admins vs automation/service principals; platform admins vs data stewards vs analysts/engineers).
Has anyone implemented this model?
What role/group structure worked well?
What pitfalls did you hit (over-permissioning, operational overhead, break-glass)?
Any reference patterns (RACI, example group design, policy framework) you can share?
01-12-2026 08:34 AM - edited 01-12-2026 08:34 AM
Great question.
While there are some general best practices, much of this comes down to how your organization already handles governance, both for deployment and for data governance.
For example, Org1 might not yet have a formal governance strategy and is effectively learning one through Databricks' permission model, whereas Org2 has a very structured approach and needs to fit Databricks' permission model into its existing pattern.
A few years ago, I co-authored these blogs, which tried to establish a mental model: [1] https://www.databricks.com/blog/2022/08/26/databricks-workspace-administration-best-practices-for-ac... and [2] https://www.databricks.com/blog/2022/11/22/serving-primer-unity-catalog-onboarding.html The second one links to a PDF worksheet that might help orient you.
That being said, the platform changes quickly and Databricks introduces new permissions and roles from time to time, so you should treat this as an iterative model rather than something set in stone.
One thing I did want to address: several organizations leave the metastore admin role "empty". That is, the metastore is "owned" by a group, but members are added to that group on a just-in-time basis, either manually or via automation (if you automate this, you can make the member a service principal rather than a user). That way you can link it to your org's existing "request access" processes. You could of course do the same for catalog and schema owners, but that might be over-engineering it.
Feel free to reply and we can keep the discussion going.
01-29-2026 12:27 PM
Hi @MoJaMa,
Can you please help with the query below?
Databricks Workspace ACL Enforcement: How to Prevent Users from Creating Objects Outside the Team Folder and Attaching to Shared Clusters?
Background
I am configuring workspace-level access control in Databricks to restrict Data Engineers (DE group) to operate only inside a dedicated team folder and to prevent unintended compute usage.
Here is the setup I implemented:
Configuration Details
Identity & Group Setup
Created a user and added the user to DE-grp
Assigned Workspace User role to DE-grp
Applied default workspace entitlements
Folder Permissions
Created a workspace folder:
/Team/DatabricksEngineering
Assigned CAN MANAGE permission to DE-grp
No permissions were granted to DE-grp on other workspace folders
Compute Permissions
Admin created shared clusters
Granted Attach To permission to DE-grp
DE-grp does NOT have permission to create clusters
Expected Behavior
I expected the following:
Users in DE-grp should only create and manage notebooks inside:
/Team/DatabricksEngineering
Users should NOT be able to:
Create workspace objects outside this folder
Attach notebooks to shared clusters unless explicitly allowed
Observed Behavior (Problem)
When logging in as a DE-grp user:
Workspace Object Scope Leak
User can still create notebooks inside their personal Home folder:
/Users/<username>
Since the user owns their Home folder, they automatically get CAN MANAGE permission
This bypasses folder-based governance
Compute Access Gap
Even though the user cannot create clusters:
They can still attach notebooks created in their Home folder to existing shared clusters and execute code
01-18-2026 07:21 AM
Thank you for the detailed explanation and for sharing the reference blogs. I may follow up once we complete our initial design draft.
01-20-2026 07:13 AM
Here is a high level RACI chart.
| Capability | Platform Admins | Data Stewards (Domain) | Data Engineers (Domain) | Analysts/BI | Security/Compliance |
| --- | --- | --- | --- | --- | --- |
| Account setup / workspaces | R/A | C | I | I | C |
| Metastore / locations / creds | R/A | C | I | I | C |
| Catalog/Schema design (per domain) | I | R/A | C | I | C |
| Grants (UC) per domain | I | R/A | C | I | C |
| ETL pipelines (jobs, DLT) | I | C | R/A | I | I |
| Row/Column policies | C | C | I | I | R/A |
| Workspace entitlements | R/A | I | I | I | C |
| Monitoring & audits | R | C | I | I | A |
| Break-glass | R/A | I | I | I | C |
4 weeks ago
Hi @APJESK,
Your 3-layer model (Account RBAC, Workspace ACLs, Unity Catalog privileges) is the right framework. I want to address both the overall design and the specific follow-up you posted about the Home folder and compute behavior, since those are common points of confusion.
ADDRESSING THE HOME FOLDER "SCOPE LEAK"
This is expected behavior, not a bug. Every user automatically has CAN MANAGE on their own /Users/<username> home folder. This is by design and cannot be revoked. There is no workspace setting to prevent users from creating notebooks in their home directory.
The practical approach most organizations take:
1. Use cluster policies and compute permissions as the enforcement boundary, not folder restrictions. Even if a user creates a notebook in their home folder, the notebook cannot do anything harmful unless it can attach to compute and that compute has access to data.
2. Remove the "Allow unrestricted cluster creation" entitlement from the group. You mentioned you did this already, which is correct.
3. Use cluster policies to enforce what configurations are available. Create a policy that locks down the settings you need and assign only that policy to the DE-grp. Users who cannot create unrestricted clusters can only create clusters that match an assigned policy.
4. Accept that the home folder is a "scratch pad." Many organizations treat the home folder as a personal sandbox. The real governance boundary is the data layer (Unity Catalog) and the compute layer (cluster policies and ACLs).
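To make point 3 above concrete, here is a minimal sketch of a cluster policy definition expressed as Python data. The policy JSON key/constraint structure (`fixed`, `range`, `allowlist`) follows the documented Databricks cluster policy format, but the specific node types, versions, and limits are illustrative assumptions, not recommendations:

```python
import json

# Sketch of a cluster policy for DE-grp (illustrative values only).
# Each key constrains one cluster attribute: "fixed" pins a value,
# "range" bounds a numeric field, "allowlist" restricts the choices.
de_policy = {
    "spark_version": {"type": "allowlist", "values": ["14.3.x-scala2.12"]},
    "node_type_id": {"type": "allowlist", "values": ["i3.xlarge"]},
    "autoscale.max_workers": {"type": "range", "maxValue": 4},
    "autotermination_minutes": {"type": "range", "minValue": 10, "maxValue": 60},
    "custom_tags.team": {"type": "fixed", "value": "data-engineering"},
}

# Serialize for the Policies UI/API, which accepts the definition as JSON.
policy_json = json.dumps(de_policy, indent=2)
print(policy_json)
```

Assigning only this policy to DE-grp means users who lack unrestricted cluster creation can still self-serve compute, but only within these bounds.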
ADDRESSING THE COMPUTE ACCESS GAP
You noted that users in DE-grp can attach notebooks from their home folder to shared clusters they have CAN ATTACH TO permission on. This is also expected behavior. CAN ATTACH TO grants the ability to attach any notebook the user can access (including their own home folder notebooks) to that cluster.
To tighten this:
1. Rely on Unity Catalog for data protection. Even if a user attaches a personal notebook to a shared cluster, Unity Catalog enforces what data they can read/write based on their identity. The notebook itself does not grant additional data permissions.
2. Use Shared access mode clusters. With shared access mode (formerly "High Concurrency" with Unity Catalog enabled), each user's queries run under their own identity. This means row filters, column masks, and GRANT/REVOKE all apply per user, regardless of which folder the notebook lives in.
3. Restrict the CAN ATTACH TO permission more narrowly. Rather than giving the entire DE-grp CAN ATTACH TO on all shared clusters, consider creating specific clusters for specific teams and only granting CAN ATTACH TO to the relevant group.
4. Audit with system tables. Use the audit log system tables (system.access.audit) to monitor who is running what and on which clusters. This gives you detective controls in addition to preventive controls.
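As a sketch of the detective control in point 4, the query below reads the documented `system.access.audit` table; the `clusters` service filter and the 7-day lookback are assumptions for the example, not a prescribed monitoring policy:

```python
# Build an audit query over system.access.audit for recent cluster events.
# Column names (event_time, user_identity.email, action_name, request_params,
# event_date, service_name) follow the documented audit log schema.
def cluster_audit_query(days: int = 7) -> str:
    """Return a SQL query listing cluster-related audit events from the last `days` days."""
    return f"""
SELECT event_time, user_identity.email, action_name, request_params
FROM system.access.audit
WHERE service_name = 'clusters'
  AND event_date >= current_date() - INTERVAL {int(days)} DAYS
ORDER BY event_time DESC
""".strip()

print(cluster_audit_query())
```

You would run the resulting SQL from a warehouse or notebook with access to system tables; scheduling it with an alert turns it into a lightweight monitoring job.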
RECOMMENDED GROUP STRUCTURE
Building on the RACI that @nayan_wylde shared, here is a group design pattern that works well:
Account-level groups (synced from IdP via SCIM):
- platform-admins: Account admin role, metastore admin
- security-compliance: Audit log access, row/column policy owners
- domain-data-stewards: Catalog/schema owners per domain
- domain-data-engineers: Workspace users, compute access via policy
- domain-analysts: SQL warehouse access, read-only on curated schemas
- automation-sps: Service principals for CI/CD and production jobs
Workspace-level entitlements per group:
- platform-admins: Workspace admin
- domain-data-engineers: Workspace access, Databricks Repos
- domain-analysts: Workspace access, Databricks SQL access
- automation-sps: Workspace access (no interactive login)
Unity Catalog grants per group:
- domain-data-stewards: USE CATALOG, USE SCHEMA, MANAGE on their domain catalog
- domain-data-engineers: USE CATALOG, USE SCHEMA, SELECT/MODIFY on dev/staging schemas
- domain-analysts: USE CATALOG, USE SCHEMA, SELECT on curated/gold schemas
- automation-sps: Full privileges on production schemas (owner pattern)
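One way to keep grants like these reviewable is to generate the GRANT statements from a declarative mapping checked into source control. A minimal sketch, with made-up group and schema names mirroring the pattern above:

```python
# Declarative mapping: group -> list of (privilege, securable type, name).
# Names here are illustrative; substitute your own domains and schemas.
GRANTS = {
    "domain-analysts": [("SELECT", "SCHEMA", "sales.gold")],
    "domain-data-engineers": [
        ("SELECT", "SCHEMA", "sales.staging"),
        ("MODIFY", "SCHEMA", "sales.staging"),
    ],
}

def render_grants(mapping):
    """Render Unity Catalog GRANT statements from the mapping, in stable order."""
    stmts = []
    for group, grants in sorted(mapping.items()):
        for privilege, securable, name in grants:
            stmts.append(f"GRANT {privilege} ON {securable} {name} TO `{group}`;")
    return stmts

for stmt in render_grants(GRANTS):
    print(stmt)
```

A CI job can diff the rendered statements against the current state (or simply re-apply them, since GRANT is idempotent), which keeps the mapping as the single source of truth.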
SEPARATION OF DUTIES: HUMANS VS SERVICE PRINCIPALS
A pattern that works well for separation of duties:
1. Production writes should only come from service principals. Human users should never have MODIFY or CREATE TABLE on production schemas. Service principals owned by the automation-sps group run production jobs.
2. Metastore admin as a JIT (just-in-time) role. As @MoJaMa mentioned, keep the metastore admin group empty and add members only when needed. You can automate this with a service principal that adds/removes members and logs the action.
3. Break-glass process. Create a dedicated break-glass service principal stored in a secure vault. Its credentials are only retrieved during emergencies, and every retrieval triggers an alert. Add it to the metastore admin group only during the incident and remove it after.
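The JIT pattern in points 2 and 3 can be sketched as follows. The `InMemoryDirectory` class is a stand-in for whatever group API you actually use (Databricks SDK, SCIM, or your IdP); it is a hypothetical interface, not a real client, and the group name is illustrative:

```python
import datetime
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("jit-access")

class InMemoryDirectory:
    """Stand-in for a real group API (Databricks SDK, SCIM, or your IdP)."""
    def __init__(self):
        self.members = {}

    def add_member(self, group, principal):
        self.members.setdefault(group, set()).add(principal)

    def remove_member(self, group, principal):
        self.members.get(group, set()).discard(principal)

class JitGrant:
    """Grant time-boxed membership in an otherwise-empty admin group, with an audit trail."""
    def __init__(self, directory, group):
        self.directory = directory
        self.group = group

    def grant(self, principal, reason, minutes=60):
        expires = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(minutes=minutes)
        self.directory.add_member(self.group, principal)
        log.info("JIT grant: %s -> %s until %s (%s)",
                 principal, self.group, expires.isoformat(), reason)
        return expires

    def revoke(self, principal):
        self.directory.remove_member(self.group, principal)
        log.info("JIT revoke: %s from %s", principal, self.group)

directory = InMemoryDirectory()
jit = JitGrant(directory, "metastore-admins")
jit.grant("sp-automation", "incident-123", minutes=30)
jit.revoke("sp-automation")
```

In a real deployment, the revoke would be driven by a scheduler checking the returned expiry, and each log line would feed your alerting pipeline.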
COMMON PITFALLS TO WATCH FOR
1. Over-permissioning via ALL PRIVILEGES. Avoid GRANT ALL PRIVILEGES, as it grants every current privilege type and makes it hard to audit what a group can actually do.
2. Workspace catalog default grants. When a workspace catalog is created, workspace users automatically receive broad default privileges (CREATE TABLE, CREATE VOLUME, etc.). Review and revoke these if they conflict with your governance model.
3. Not using USE CATALOG / USE SCHEMA. Users need both USE CATALOG on the parent catalog AND USE SCHEMA on the schema to access objects. Forgetting one of these is a common source of access issues.
4. Account groups vs workspace-local groups. Always use account-level groups (synced from your IdP). Workspace-local groups cannot be used with Unity Catalog privileges or across workspaces.
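Pitfall 3 is easy to catch mechanically. Here is a small sketch that checks, for one group's grants held as an in-memory set (made-up data, not read from a live metastore), whether every table-level grant is backed by the USE CATALOG and USE SCHEMA grants it depends on:

```python
# Check USE CATALOG / USE SCHEMA prerequisites for table-level grants.
def missing_prereqs(grants):
    """grants: set of (privilege, securable_path) tuples for one group.

    For each three-part path (catalog.schema.table), verify that the
    parent USE CATALOG and USE SCHEMA grants are present; return the
    required grants that are absent."""
    missing = set()
    for priv, path in grants:
        parts = path.split(".")
        if len(parts) == 3:  # catalog.schema.table
            if ("USE CATALOG", parts[0]) not in grants:
                missing.add(("USE CATALOG", parts[0]))
            if ("USE SCHEMA", f"{parts[0]}.{parts[1]}") not in grants:
                missing.add(("USE SCHEMA", f"{parts[0]}.{parts[1]}"))
    return missing

grants = {("SELECT", "sales.gold.orders"), ("USE CATALOG", "sales")}
print(missing_prereqs(grants))  # reports the absent USE SCHEMA grant
```

Feeding this from `information_schema` (or your declarative grant mapping) turns a common source of "access denied" tickets into a CI check.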
KEY DOCUMENTATION REFERENCES
- Access control overview:
https://docs.databricks.com/aws/en/security/auth/access-control/index.html
- Unity Catalog privileges and securables:
https://docs.databricks.com/aws/en/data-governance/unity-catalog/manage-privileges/index.html
- Cluster policies:
https://docs.databricks.com/aws/en/admin/clusters/policies.html
- Identity and access management best practices:
https://docs.databricks.com/aws/en/admin/users-groups/best-practices.html
- Audit log system tables:
https://docs.databricks.com/aws/en/admin/system-tables/audit-logs.html
- Workspace administration best practices blog:
https://www.databricks.com/blog/2022/08/26/databricks-workspace-administration-best-practices-for-ac...
- Unity Catalog onboarding primer blog:
https://www.databricks.com/blog/2022/11/22/serving-primer-unity-catalog-onboarding.html
* This reply used an agent system I built to research and draft this response based on the wide set of documentation I have available and previous memory. I personally review the draft for any obvious issues and for monitoring system reliability and update it when I detect any drift, but there is still a small chance that something is inaccurate, especially if you are experimenting with brand new features.
If this answer resolves your question, could you mark it as "Accept as Solution"? That helps other users quickly find the correct fix.