Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

Materialized view creation fails

PNC
Databricks Partner

Hi,

I have run into a problem when creating a materialized view.

Here's my simple query I'm trying to run:

%sql
create or replace materialized view catalog.schema.mView_test
as select * from catalog.schema.table limit 10;

I'm getting the following error:

Encountered an error with Unity Catalog while setting up the pipeline on cluster xxxx-xxxxxx-xxxxxxxx-xxx. 
Ensure that your Unity Catalog configuration is correct, and that required resources (e.g., catalog, schema) exist and are accessible. 
Also verify that the cluster has appropriate permissions to access Unity Catalog.

Details: Operation failed: "This request is not authorized to perform this operation.", 403, GET, https://storageaccount.dfs.core.windows.net/container?upn=false&beginFrom=0000000000000000000&resource=filesystem&maxResults=5000&directory=catalog/schema/__unitystorage/schemas/9e94de07-1f9d-4798-b250-d34f6f2b769d/tables/b34a78c7-fbcc-4265-a331-da4372e59afc/_delta_log&timeout=90&recursive=false&st=2026-04-16T07:00:09Z&sv=2020-02-10&ske=2026-04-16T09:00:09Z&sig=XXXXX&sktid=2fb08174-a150-479d-8d15-2174da71a11a&se=2026-04-16T08:17:22Z&sdd=7&skoid=1456c2e6-8869-41a4XXXXXXXXXXXXXXXXXX&spr=https&sks=b&skt=2026-04-16T07:00:09Z&sp=racwdxlm&skv=2025-01-05&sr=d, AuthorizationFailure, , "This request is not authorized to perform this operation. RequestId:1b89a867-b01f-0056-1b71-cdf6f8000000 Time:2026-04-16T07:17:26.4365052Z"

I'm running the query on our own SQL warehouse, not a serverless SQL warehouse.

I have verified the following:

  1. I have permissions to the catalog and schema
  2. Browsing the external location works
  3. The access connector that the storage credential is mapped to has the Storage Blob Data Contributor role on the storage account

My suspicion is that the culprit here is that materialized views are backed by a serverless pipeline even though I'm not using serverless compute to run my notebook. Could this be the issue? If so, how do I fix it?

4 REPLIES

Ashwin_DSA
Databricks Employee

Hi @PNC,

I don't think it's related to using serverless compute to run the notebook. I suspect it's related to your access to the underlying storage.

Can you try the following?

  1. In Catalog Explorer, open catalog → schema, check the Storage / managed location section, and note which storage credential is attached.
  2. In the Databricks account console, open that storage credential and note which access connector / managed identity / service principal it uses.
  3. In the Azure Portal for the storage account:

     • Under Access control (IAM), confirm that this exact identity has Storage Blob Data Contributor (or Owner) scoped to the storage account, or at least to the container that holds catalog/schema/__unitystorage/....
     • If you only granted Storage Blob Data Contributor to an access connector used for a different external location, that won’t help with this MV’s backing location.
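
If it's easier than clicking through the UIs, the storage credential and its access connector can also be inspected in SQL. A quick sketch (the credential and location names below are placeholders — substitute your own):

```sql
-- List all storage credentials visible to you
SHOW STORAGE CREDENTIALS;

-- Inspect a specific credential; for Azure this surfaces the access connector ID
DESCRIBE STORAGE CREDENTIAL my_credential;

-- Inspect an external location to see its URL and attached credential
DESCRIBE EXTERNAL LOCATION my_external_location;
```

Comparing the access connector ID from the credential against the identity granted in IAM is the fastest way to spot a mismatch.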

Also, can you confirm you can read the base table from the same cluster/warehouse?

Just run something like:

SELECT COUNT(*) FROM catalog.schema.table;

If this fails with a UC permission error, fix the catalog/schema/table grants first.

You may also want to check that the compute is UC-compatible (a shared-access-mode cluster or a SQL warehouse, not a legacy no-isolation or single-user-only cluster). If other UC tables in this catalog work from this compute, you’re probably fine.

If this answer resolves your question, could you mark it as “Accept as Solution”? That helps other users quickly find the correct fix.


Regards,
Ashwin | Delivery Solution Architect @ Databricks
Helping you build and scale the Data Intelligence Platform.
***Opinions are my own***

PNC
Databricks Partner
The schema's storage location is something like this:


abfss://my-container@my-storage-account.dfs.core.windows.net/catalog/schema/__unitystorage/schemas/xxx-xxx-xxx-xxx-xxx


I have an external location called "container_catalog" for the URL abfss://my-container@my-storage-account.dfs.core.windows.net/catalog


The storage credential for this location is called "my_credential" and its connector ID is /subscriptions/xxx-xxx-xxx-xxxx-xxx/resourceGroups/my-resource-group/providers/Microsoft.Databricks/accessConnectors/my-access-connector


Now when I go to the Azure portal, navigate to the storage account "my-storage-account", and open IAM, I can see that my-access-connector has the Storage Blob Data Contributor role assigned to it (scoped to the storage account).


When I run 
SELECT COUNT(*) FROM catalog.schema.table;
I get the row count as expected.

Ashwin_DSA
Databricks Employee

Hi @PNC,

Thanks for checking.

I think your setup is very close. The missing piece is which identity is actually used for the MV backing storage, which is not necessarily the same as the one behind your external location.

Because you’re already seeing a 403 from ADLS for the __unitystorage path, the serverless MV pipeline is actually starting, which is good. The failure is now purely an Azure Storage authorisation problem, not a serverless problem.

SELECT COUNT(*) FROM catalog.schema.table reads from your external location abfss://my-container@my-storage-account.dfs.core.windows.net/catalog using storage credential my_credential (backed by my-access-connector), which has Blob Data Contributor. So, it works.

Your create MV query writes MV data under the schema’s managed location (.../catalog/schema/__unitystorage/schemas/...).

The serverless MV pipeline uses the identity associated with the catalog/schema’s managed storage / metastore default storage, which may be different from my_credential.

So you’ve granted rights to my-access-connector (for the external location), but the identity actually used for __unitystorage/... is likely another access connector or managed identity that currently does not have rights on my-storage-account, hence the 403.

Can you find which storage credential is used for the managed location?

You can do this with the queries below:

DESCRIBE CATALOG EXTENDED catalog;
DESCRIBE SCHEMA EXTENDED catalog.schema;
Or, in Catalog Explorer, open the catalog's Storage / Managed storage section, then go to the schema and check its Managed location and Storage credential (if shown).

You’re looking for the storage credential name (and thus the access connector / identity) that backs the managed location where __unitystorage/schemas/... lives. It may not be my_credential.

In the Databricks account console, open the storage credential you found in the step above and note its access connector / managed identity.

In the Azure Portal, go to storage account my-storage-account → Access control (IAM) and add a role assignment as below:

  • Role: Storage Blob Data Contributor (or Owner)
  • Scope: the storage account (or at least my-container)
  • Principal: the identity from that storage credential (not just my-access-connector if they differ).
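
If you manage Azure from the command line, a sketch of the same steps with the Azure CLI (all IDs are placeholders, and this assumes you have rights to create role assignments on the account):

```shell
# Grant the managed identity behind the storage credential access to the account.
# <principal-object-id>, <subscription-id>, and <resource-group> are placeholders.
az role assignment create \
  --assignee "<principal-object-id>" \
  --role "Storage Blob Data Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/my-storage-account"

# Double-check which roles the identity currently holds
az role assignment list \
  --assignee "<principal-object-id>" \
  --all --output table
```

Note that new role assignments on ADLS can take a few minutes to propagate before the 403 goes away.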

After correcting storage permissions for the actual managed-location identity, rerun the statement to see if it works:

CREATE OR REPLACE MATERIALIZED VIEW catalog.schema.mView_test
AS SELECT * FROM catalog.schema.table LIMIT 10;

If DESCRIBE ... EXTENDED shows that the managed location is using some internal/default credential (not visible as my_credential), you’ll need to give that identity Storage Blob Data Contributor on my-storage-account as well, or move the schema’s managed storage to a credential you control.
 
If this answer resolves your question, could you mark it as “Accept as Solution”? That helps other users quickly find the correct fix.
Regards,
Ashwin | Delivery Solution Architect @ Databricks
Helping you build and scale the Data Intelligence Platform.
***Opinions are my own***

balajij8
Contributor

There are multiple requirements for materialized views; you can check the list below:

  • You must use a Unity Catalog enabled pro or serverless SQL warehouse.

  • To incrementally refresh a materialized view from Delta tables, the source tables must have row tracking enabled.
  • The owner (the user who creates the materialized view) must have the following permissions:

    • SELECT privilege on the base tables referenced by the materialized view.
    • USE CATALOG and USE SCHEMA privileges on the catalog and schema containing the source tables for the materialized view.
    • USE CATALOG and USE SCHEMA privileges on the target catalog and schema for the materialized view.
    • CREATE TABLE and CREATE MATERIALIZED VIEW privileges on the schema containing the materialized view.
  • Workspace must be in a region that supports serverless SQL warehouses.
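
For reference, the permission grants and row-tracking requirement above could be set up with statements like these (the catalog, schema, table, and principal names are placeholders):

```sql
-- Grants for the user who will own the materialized view
GRANT USE CATALOG ON CATALOG my_catalog TO `user@example.com`;
GRANT USE SCHEMA ON SCHEMA my_catalog.my_schema TO `user@example.com`;
GRANT SELECT ON TABLE my_catalog.my_schema.base_table TO `user@example.com`;
GRANT CREATE TABLE, CREATE MATERIALIZED VIEW ON SCHEMA my_catalog.my_schema TO `user@example.com`;

-- Enable row tracking on the source Delta table so the MV can refresh incrementally
ALTER TABLE my_catalog.my_schema.base_table
  SET TBLPROPERTIES ('delta.enableRowTracking' = 'true');
```

If the source and target schemas differ, the USE CATALOG / USE SCHEMA grants are needed on both.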