Thursday
Hi,
I have run into a problem when creating a materialized view.
Here's my simple query I'm trying to run:
%sql
create or replace materialized view catalog.schema.mView_test
as select * from catalog.schema.table limit 10;

I'm getting the following error:
Encountered an error with Unity Catalog while setting up the pipeline on cluster xxxx-xxxxxx-xxxxxxxx-xxx.
Ensure that your Unity Catalog configuration is correct, and that required resources (e.g., catalog, schema) exist and are accessible.
Also verify that the cluster has appropriate permissions to access Unity Catalog.
Details: Operation failed: "This request is not authorized to perform this operation.", 403, GET, https://storageaccount.dfs.core.windows.net/container?upn=false&beginFrom=0000000000000000000&resource=filesystem&maxResults=5000&directory=catalog/schema/__unitystorage/schemas/9e94de07-1f9d-4798-b250-d34f6f2b769d/tables/b34a78c7-fbcc-4265-a331-da4372e59afc/_delta_log&timeout=90&recursive=false&st=2026-04-16T07:00:09Z&sv=2020-02-10&ske=2026-04-16T09:00:09Z&sig=XXXXX&sktid=2fb08174-a150-479d-8d15-2174da71a11a&se=2026-04-16T08:17:22Z&sdd=7&skoid=1456c2e6-8869-41a4XXXXXXXXXXXXXXXXXX&spr=https&sks=b&skt=2026-04-16T07:00:09Z&sp=racwdxlm&skv=2025-01-05&sr=d, AuthorizationFailure, , "This request is not authorized to perform this operation. RequestId:1b89a867-b01f-0056-1b71-cdf6f8000000 Time:2026-04-16T07:17:26.4365052Z"

I'm running the query on our own SQL Warehouse, not a serverless SQL warehouse.
I have made sure of the following:
My suspicion is that the culprit here is that materialized views are backed by a serverless pipeline, even though I'm not using serverless compute to run my notebook. Could this be the issue here? If so, how do I fix this?
Thursday
Hi @PNC,
I don't think this is related to using serverless compute for the notebook. I'm wondering whether it's related to your access to the underlying storage.
Can you try the steps below?
In Catalog Explorer, open catalog → schema → check the Storage / managed location section and note which storage credential is attached.
In the Databricks account console, open that storage credential and note which access connector / managed identity / service principal it uses.
In the Azure Portal for storageaccount:
Also, can you confirm you can read the base table from the same cluster/warehouse?
Just run something like this:
SELECT COUNT(*) FROM catalog.schema.table;
You may also want to check that the compute is UC-compatible (a shared cluster or a SQL warehouse, not a legacy/no-isolation cluster). If other UC tables in this catalog work from this compute, you're probably fine.
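As a quick sanity check alongside the count query, you can confirm which identity the warehouse session is running as and what grants exist on the base table (the three-part name below is the placeholder from your example, not a real table):

```sql
-- Which UC identity is this session using?
SELECT current_user();

-- What privileges are granted on the base table?
-- (catalog.schema.table is the placeholder name from the original post)
SHOW GRANTS ON TABLE catalog.schema.table;
```

If the count query succeeds but the materialized view still fails, that points at the write path rather than your read privileges.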
If this answer resolves your question, could you mark it as “Accept as Solution”? That helps other users quickly find the correct fix.
Thursday
SELECT COUNT(*) FROM catalog.schema.table;

Thursday
Hi @PNC,
Thanks for checking...
I think your setup is very close. The missing piece is which identity is actually used for the MV backing storage, which is not necessarily the same as the one behind your external location.
Because you’re already seeing a 403 from ADLS for the __unitystorage path, the serverless MV pipeline is actually starting, which is good. The failure is now purely an Azure Storage authorisation problem, not a serverless problem.
SELECT COUNT(*) FROM catalog.schema.table reads from your external location abfss://my-container@my-storage-account.dfs.core.windows.net/catalog using storage credential my_credential (backed by my-access-connector), which has Blob Data Contributor. So, it works.
Your CREATE MATERIALIZED VIEW query, however, writes MV data under the schema's managed location (.../catalog/schema/__unitystorage/schemas/...).
The serverless MV pipeline uses the identity associated with the catalog/schema’s managed storage / metastore default storage, which may be different from my_credential.
So you’ve granted rights to my-access-connector (for the external location), but the identity actually used for __unitystorage/... is likely another access connector or managed identity that currently does not have rights on my-storage-account, hence the 403.
Can you find out which credential is used for the managed location? You can do this with the following queries:
DESCRIBE CATALOG EXTENDED catalog;
DESCRIBE SCHEMA EXTENDED catalog.schema;
You’re looking for the storage credential name (and thus the access connector / identity) that backs the managed location where __unitystorage/schemas/... lives. It may not be my_credential.
In the Databricks account console, open the storage credential you found in the step above and note its access connector / managed identity.
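You can also inspect the credential from SQL instead of the account console. A small sketch, assuming the credential turned out to be named my_credential (substitute whatever name the DESCRIBE output reported):

```sql
-- List all storage credentials visible to you
SHOW STORAGE CREDENTIALS;

-- Show the access connector / managed identity behind one credential
-- (my_credential is a placeholder name from this thread)
DESCRIBE STORAGE CREDENTIAL my_credential;
```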
In the Azure Portal, go to storage account my-storage-account → Access control (IAM) and add a role assignment for that identity (typically Storage Blob Data Contributor, matching what you already granted to my-access-connector).
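If you prefer the CLI, the same role assignment can be sketched with the Azure CLI. Every ID below is a placeholder you would substitute from your own environment; this is an illustration, not the exact command for your tenant:

```shell
# Grant Storage Blob Data Contributor on the storage account to the managed
# identity that backs the catalog's managed location.
# <principal-object-id>, <sub-id>, and <rg> are placeholders.
az role assignment create \
  --assignee "<principal-object-id>" \
  --role "Storage Blob Data Contributor" \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/my-storage-account"
```

Role assignments can take a few minutes to propagate before the 403 goes away.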
After correcting storage permissions for the actual managed-location identity, rerun the statement to see if it works:
CREATE OR REPLACE MATERIALIZED VIEW catalog.schema.mView_test AS SELECT * FROM catalog.schema.table LIMIT 10;
Thursday
There are multiple requirements for materialized views. Here are the key ones:
You must use a Unity Catalog enabled pro or serverless SQL warehouse.
The owner (the user who creates the materialized view) must have the following permissions: