yesterday
Hey Databricks forum,
We are seeing an issue in our Azure Databricks environment since this Sunday: we are unable to list the files inside our containers. We have Unity Catalog and external locations configured, and our ingestion pipelines read from the staging containers and move the data into our final enriched versions.
We are having issues even listing files in the directory; we get a "not authorized to perform this operation" error.
ExecutionError: An error occurred while calling o534.ls. : Operation failed: "This request is not authorized to perform this operation.", 403,
We use Unity Catalog with an external location to manage access via a managed identity. The managed identity has the Storage Blob Data Contributor role (we even elevated its privileges), and when we test the connection everything checks out.
Please help us identify what we are missing; this is a production run and we are unable to figure out what's happening.
yesterday
Hey @databricks1111,
This usually happens when there's a small disconnect between how your Unity Catalog external location is set up and the permissions that the managed identity actually has on the storage account, even if everything looks right from the Azure side.
A few things you can check that typically help fix it:
1. Check your external location setup:
Run this in a Databricks SQL cell:
DESCRIBE EXTERNAL LOCATION <your_external_location_name>;
Make sure the external location is still pointing to the right container path and using the correct credential (the one tied to your managed identity).
Sometimes, if the credential or external location was updated or renamed, Unity Catalog might still reference the old mapping.
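If you want to confirm the credential side too, DESCRIBE STORAGE CREDENTIAL shows which access connector / managed identity the credential is actually bound to (the name below is just a placeholder for your own credential):
DESCRIBE STORAGE CREDENTIAL <your_storage_credential_name>;
Compare the access connector ID it reports with the one you expect on the Azure side.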
2. Test if the credential really works:
Try loading a file directly from that container using a Unity Catalog–enabled cluster:
spark.read.format("parquet").load("abfss://<container>@<storageaccount>.dfs.core.windows.net")
If this throws the same `403 not authorized` error, then the issue is more on the Azure side — not Unity Catalog.
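You can also reproduce the exact listing call from your stack trace (the o534.ls) with dbutils; the path below is a placeholder:
dbutils.fs.ls("abfss://<container>@<storageaccount>.dfs.core.windows.net/<path>")
Running both the read and the list on the same UC-enabled cluster helps show whether the credential is being rejected as a whole or only for certain paths.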
3. Double-check Azure permissions:
Even though your managed identity has Storage Blob Data Contributor, make sure:
--> It’s assigned both at the storage account level and the container level.
--> No policies are overriding it (sometimes inherited deny assignments can block access).
--> The changes have fully propagated — Azure RBAC changes can take some time.
You can test the access from Azure CLI using:
az login --identity
az storage blob list --account-name <storageaccount> --container-name <container> --auth-mode login
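If you also want to see exactly which roles the managed identity holds on that account, something like this should list them (the object ID, subscription and resource group below are placeholders you'd fill in):
az role assignment list --assignee <managed-identity-object-id> --scope /subscriptions/<subscription-id>/resourceGroups/<resourcegroup>/providers/Microsoft.Storage/storageAccounts/<storageaccount> --output table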
4. Review storage network/firewall settings:
If your storage account recently got a new network policy or firewall setting, Databricks might be getting blocked.
Make sure “Allow trusted Microsoft services to access this storage account” is turned on — that often resolves sudden access issues like this.
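A quick way to check that from the CLI (again with placeholder names) is to look at the account's network rule set; you'd want defaultAction to be Allow, or the bypass list to include AzureServices:
az storage account show --name <storageaccount> --resource-group <resourcegroup> --query networkRuleSet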
5. If nothing else works:
Sometimes credentials tied to Unity Catalog can silently break after certain updates. If all looks fine but the issue still persists, try recreating the external location credential
and re-linking it. This refreshes the identity link and usually clears the problem.
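If you do end up recreating it, the re-link itself is just a short SQL statement; the location name, URL and credential name below are placeholders for your own values:
CREATE EXTERNAL LOCATION IF NOT EXISTS <your_external_location_name>
URL 'abfss://<container>@<storageaccount>.dfs.core.windows.net/<path>'
WITH (STORAGE CREDENTIAL <your_storage_credential_name>);
Just remember to re-grant whatever privileges your pipelines had on the old location afterwards.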
yesterday
Hey Vasireddy,
Thank you for replying,
We are seeing this issue only when access happens from shared compute; with personal compute it works fine.
We would like to understand what differs, from an access-credential standpoint, in what reaches Azure from Databricks. Our assumption was that the Databricks access connector is the only identity that reaches Azure.
We are pretty sure we haven't changed anything in our cluster configuration settings in the last 2-3 months.
We have checked that the external location points to the right credential; it seems to work fine, and the connection test passes as well.
We are able to list the blobs, but we are unable to access the files inside them when using shared compute; with personal compute the flows and everything worked fine.
We had already recreated the access connector; recreating the external location connection we haven't tried yet.
If the cluster isn't the one at fault, we can surely try that out too.
Attaching a snapshot of the cluster configuration.
15 hours ago
Hey @databricks1111,
Thanks for the extra details.
The behavior you’re seeing (works fine on personal compute but fails on shared compute) usually comes down to which identity Databricks uses to access Azure Storage.
When you use personal compute, operations run under your user identity in Unity Catalog. But with shared compute, Databricks uses the workspace’s managed identity (via the access connector).
That difference can cause exactly what you're seeing: the shared compute can list blobs (since it can talk to the storage account) but fails to read the actual files, because the managed identity tied to the shared compute doesn't have the same Unity Catalog or storage-level permissions on the data path.
Here’s what I’d suggest checking:
1. In Azure, make sure the access connector’s managed identity has Storage Blob Data Contributor on both the storage account and the specific container.
2. In Databricks, confirm that the external location’s credential uses this same managed identity.
3. Also verify that the shared compute cluster is Unity Catalog-enabled and has the right cluster access mode (Single User vs Shared Access).
If it’s in Shared mode, ensure users or service principals accessing the files have permission in Unity Catalog to read that external location.
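To check and fix that from SQL (names below are placeholders), something along these lines should do it:
SHOW GRANTS ON EXTERNAL LOCATION <your_external_location_name>;
GRANT READ FILES ON EXTERNAL LOCATION <your_external_location_name> TO `<group_or_service_principal>`;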
11 hours ago
We did have to reconfigure the access connector and the external location connections again, and that resolved our issue. But it's weird that it would stop working when nothing in our environment changed.
Thank you for your help with the issue.