02-03-2026 02:00 AM
I am having to transfer ownership of pipelines to users so that they can read the materialised views associated with those pipelines; otherwise they get a 'User does not have SELECT on table...' error. This is obviously bonkers, as a pipeline can only have one owner and you can't give ownership to groups. I have granted users ALL PRIVILEGES on the schema the materialised views are stored in, but when they try to read a view it wants access to the underlying files, hence the error.
This is a new error, by the way: the same pipelines/materialised views were accessible to the same users/clusters a month ago, so something has changed recently in Databricks to break this.
02-03-2026 02:24 AM
Hi @dtb_usr
Good day!!
Materialized views created by DLT now enforce file-level access using the identity of the pipeline owner. This means that any file-level operations—such as refreshes or updates—on materialized views are performed using the credentials and privileges of the pipeline owner, ensuring consistent access enforcement.
How Identity Enforcement Works
- When a DLT pipeline creates and maintains materialized views (MVs), all file-level access is executed as the pipeline owner.
- If the pipeline owner changes (for example, if ownership is transferred due to the owner leaving the company or through a managed process), all related assets—including materialized views—are re-owned by the new pipeline owner. Future DLT runs and file accesses are performed as this new owner.
- These enforcement rules ensure that:
  - Access checks for backing data happen using the current pipeline owner’s credentials.
  - After ownership changes, historical grants (such as SELECT on existing MVs) and other permissions remain intact, but future operations are controlled by the new owner.
- This approach eliminates the risk of lingering access if the original owner leaves the organization and centralizes control, making it clear which identity is responsible for access.
Important Limitations & Permissions
- To change the pipeline owner (and thus the identity used for file-level access), a user must have both workspace admin and metastore admin privileges.
- The new owner immediately becomes responsible for all DLT-managed resources, and all file-level accesses by DLT MVs are run with their privileges.
- Non-admin users and pipeline owners themselves cannot arbitrarily transfer ownership to others, reducing risk of privilege escalation.
- Pipeline ownership must always map to a single user (not a group). Service principal support is planned but not yet available for all configurations.
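To see which identity is currently being used for file-level access, you can inspect the MV directly from a SQL editor. This is an illustrative sketch only: the object names below are placeholders, and you will need sufficient privileges on the objects to run these statements.

```sql
-- Placeholder names; substitute your own catalog, schema, and MV.
-- The Owner field in the output shows the identity used for file-level access.
DESCRIBE TABLE EXTENDED catalogue_name.schema_name.my_materialised_view;

-- List the explicit grants on the MV (MVs are securable as tables in Unity Catalog).
SHOW GRANTS ON TABLE catalogue_name.schema_name.my_materialised_view;
```

If the owner shown is not who you expect, that identity is the one whose credentials back every refresh and read of the MV's underlying files.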
02-03-2026 02:38 AM
Hi Saritha_S,
Thanks for the reply. I understand your point about pipeline ownership; my issue is with access to the materialised views. Are you saying that materialised views created by a pipeline can only be viewed and read by the pipeline owner and no other user?
How do I give other users (who are not the pipeline owner) access to the materialised views created by my pipeline? All the materialised views created by my pipelines are stored in one schema, and I assumed that if I gave users ALL PRIVILEGES on this schema they would be able to read any view/table inside it, but that is obviously not the case. So how do I resolve this issue?
02-03-2026 02:45 AM
02-03-2026 02:50 AM
[INSUFFICIENT_PERMISSIONS] Insufficient privileges: User does not have SELECT on Table 'catalogue_name.schem_name.__materialization_mat_xxx_xxx_materialised_view_name'. SQLSTATE: 42501
02-03-2026 02:56 AM
Hi @dtb_usr
Thank you for sharing the error; I understand now. Can you raise a support ticket with the Databricks team? We need to dig deeper into this issue.
02-03-2026 03:57 AM
Hi @dtb_usr
Can you try querying the same MV from a SQL Warehouse or Shared compute?
02-03-2026 07:07 AM
Hi,
So the query runs fine in the SQL Editor but not in notebooks on a personal cluster.
02-05-2026 01:34 AM
Since the query is not working on a dedicated cluster, please raise a support ticket with the Databricks support team.
02-05-2026 02:14 AM
2 weeks ago
Hi @dtb_usr,
Based on the error message and the fact that the query works in the SQL Editor (which uses a SQL warehouse) but fails on a personal/dedicated cluster in notebooks, this is almost certainly a compute access mode issue rather than a Unity Catalog grants issue.
WHAT IS HAPPENING
Materialized views created by Lakeflow Spark Declarative Pipelines (SDP, formerly known as DLT) have specific compute requirements for querying. The key distinction is the type of compute being used:
- SQL warehouses: fully supported for all users with SELECT privilege
- Standard access mode compute (formerly shared): fully supported for all users with SELECT privilege
- Dedicated access mode compute (personal/single-user clusters): supported ONLY under specific conditions
For dedicated access mode clusters, the rules are:
1. If you ARE the materialized view owner (which is the pipeline owner): Databricks Runtime 14.3 or above is required
2. If you are NOT the owner: Databricks Runtime 15.4 or above is required, AND the workspace must be enabled for serverless compute
This is likely why your users started seeing the error recently. Databricks has been tightening the enforcement of access controls on dedicated compute, and the behavior you relied on previously (where any user could query MVs from a personal cluster) has been updated to enforce the documented requirements.
HOW TO RESOLVE THIS
You have a few options:
OPTION 1: Switch users to Standard access mode compute (recommended)
Standard access mode clusters support multi-user access with full Unity Catalog enforcement and allow any user with SELECT privilege to query materialized views. This is the recommended approach going forward.
OPTION 2: Upgrade dedicated clusters to DBR 15.4+
If your users must use dedicated (single-user) clusters, ensure:
- The clusters are running Databricks Runtime 15.4 or above
- Your workspace has serverless compute enabled
OPTION 3: Use SQL warehouses for querying
If users are only reading data (SELECT queries), SQL warehouses are fully supported and handle all the permission enforcement correctly.
VERIFYING YOUR GRANTS
Your grants on the schema should be sufficient. For completeness, make sure each user has:
```sql
GRANT USE CATALOG ON CATALOG catalogue_name TO `user_or_group`;
GRANT USE SCHEMA ON SCHEMA catalogue_name.schema_name TO `user_or_group`;
GRANT SELECT ON SCHEMA catalogue_name.schema_name TO `user_or_group`;
```
Note that ALL PRIVILEGES should cover these, but it is worth confirming the grants are in place at the catalog level as well.
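To double-check what is actually in place at each level, you can audit the grants directly (again using placeholder object names):

```sql
-- Show everything granted at the catalog and schema level;
-- substitute your own catalog and schema names.
SHOW GRANTS ON CATALOG catalogue_name;
SHOW GRANTS ON SCHEMA catalogue_name.schema_name;
```

If SELECT (or ALL PRIVILEGES) appears for your users at the schema level and they still get the error on a dedicated cluster, that points back to the compute access mode requirements above rather than a missing grant.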
DOCUMENTATION REFERENCES
Requirements for querying materialized views:
https://docs.databricks.com/en/views/materialized.html
Lakeflow Spark Declarative Pipelines with Unity Catalog:
https://docs.databricks.com/en/delta-live-tables/unity-catalog.html
Compute access modes:
https://docs.databricks.com/en/compute/configure.html
* This reply used an agent system I built to research and draft this response based on the wide set of documentation I have available and previous memory. I personally review the draft for any obvious issues and for monitoring system reliability and update it when I detect any drift, but there is still a small chance that something is inaccurate, especially if you are experimenting with brand new features.