11-27-2025 05:21 AM
Hi everyone,
I’m running into two blocking issues while trying to run a Delta Live Tables (DLT) pipeline on Databricks (Azure). I’m hoping someone can help me understand what’s going wrong.
1. Unity Catalog cannot access underlying ADLS storage
Every DLT pipeline run fails with:
UNITY_CATALOG_INITIALIZATION_FAILED
INVALID_STATE.UC_CLOUD_STORAGE_ACCESS_FAILURE
AbfsRestOperationException
Even though all containers show the correct ACLs (rwx) and the IAM role assignments look correct at the storage-account level, the pipeline still cannot initialize UC or access the storage.
2. VM size / SKU not available for DLT job compute
When the DLT pipeline tries to start a job cluster, I get:
The VM size you are specifying is not available (SkuNotAvailable)
QuotaExceeded: Required cores exceed available limit
Even small SKUs fail. Azure CLI shows that many F-series SKUs exist in UK South, yet in Databricks they either fail to provision or don't appear in the dropdown at all.
This makes it impossible to run even a minimal DLT cluster with 1 worker.
What I’m trying to understand
I’ve already tried the obvious fixes, but I’m still getting the same two errors. Any guidance would be hugely appreciated.
Thanks in advance to anyone who can help!
11-28-2025 04:11 AM
Short Answer:
The UC error is almost always caused by the wrong identity being used in the Storage Credential / External Location, even if IAM + ACLs look correct.
The VM failures are typically quota + regional capacity issues in UK South, especially for older families like DSv2/F-series.
Fixes:
Re-check the Storage Credential → External Location chain
Increase quotas for a modern VM family
Or try a newer region with better capacity
I’ll walk through the fixes step by step.
11-28-2025 04:13 AM
UC Cloud Storage Access Failure (UC_CLOUD_STORAGE_ACCESS_FAILURE)
Even if IAM + ACLs look correct, Unity Catalog does not use ACLs directly. UC always accesses ADLS through a Storage Credential → External Location → Catalog chain.
A few things to verify:
Run:
SHOW EXTERNAL LOCATIONS;
DESCRIBE EXTERNAL LOCATION <your_location>;
The credential listed here must map to the Access Connector managed identity, not your user.
If the location shows a different storage credential, UC will try to access ADLS with the wrong identity → AbfsRestOperationException.
DESCRIBE CATALOG <your_catalog>;
The storage_location must itself be an external location with a valid credential.
If the metastore’s root storage was set up before the credential chain existed, it might not be correctly attached.
Enable "Blob Read/Write/Error" diagnostics on the storage account.
You will likely see failed requests from a principal different from the Access Connector MI — that’s the real smoking gun.
Unity Catalog requires:
Access Connector MI → StorageCredential assignment
StorageCredential → External Location
External Location → Catalog/Schema/Table
ACLs alone won’t fix UC initialisation.
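The chain above can be sketched in SQL. This is a hedged sketch, not your exact setup: `adls_cred`, `dlt_loc`, `dlt_catalog`, and the abfss URL placeholders all stand in for your own Access Connector credential, container, and catalog names.

```sql
-- Sketch: wiring the UC access chain (all names are placeholders).
-- `adls_cred` must be the storage credential backed by the Access Connector
-- managed identity (created in Catalog Explorer or via the UC API).
SHOW STORAGE CREDENTIALS;
DESCRIBE STORAGE CREDENTIAL adls_cred;

-- Point an external location at the container through that credential.
CREATE EXTERNAL LOCATION IF NOT EXISTS dlt_loc
  URL 'abfss://<container>@<account>.dfs.core.windows.net/<path>'
  WITH (STORAGE CREDENTIAL adls_cred);

-- Let the pipeline's principal read and write through it.
GRANT READ FILES, WRITE FILES ON EXTERNAL LOCATION dlt_loc TO `your-user-or-group`;

-- A catalog whose managed storage sits inside that location.
CREATE CATALOG IF NOT EXISTS dlt_catalog
  MANAGED LOCATION 'abfss://<container>@<account>.dfs.core.windows.net/<path>';
```

If DESCRIBE EXTERNAL LOCATION shows a credential other than `adls_cred`, that mismatch is exactly the wrong-identity failure described above.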
11-28-2025 04:14 AM
DLT pipelines always spin up job compute, and Azure is strict about SKU availability per region & per subscription.
Most common causes
Quota for that VM family is set to 2 vCPUs
Databricks shows:
“Estimated available: 2”
“QuotaExceeded”
The SKU exists in Azure CLI but Azure has no capacity for it in UK South
This is very common for older families like DS_v2 and F-series.
The Pipeline UI hides Advanced Options
This normally happens when Databricks can’t find any valid SKUs for job compute under your subscription constraints.
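To see why even a 1-worker cluster trips a tiny quota, remember the driver counts too. A minimal sketch, assuming a hypothetical 4-vCPU SKU for both driver and worker:

```python
# Sketch: why a 1-worker DLT cluster exceeds a 2-vCPU family quota.
# The 4-vCPU driver/worker sizes here are illustrative assumptions.
def required_vcpus(driver_vcpus, worker_vcpus, num_workers):
    """Total vCPUs a job cluster needs: one driver plus all workers."""
    return driver_vcpus + worker_vcpus * num_workers

quota_limit = 2  # a typical default vCPU limit for an unused VM family
needed = required_vcpus(driver_vcpus=4, worker_vcpus=4, num_workers=1)
print(needed, needed > quota_limit)  # → 8 True
```

So even the smallest practical cluster needs roughly four times a 2-vCPU default quota, which is why the quota increase below matters more than the SKU choice.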
What to check
In Azure Portal → Subscription → Usage + quotas
Filter by:
Region: UK South
VM family: Dsv2, F-series, Dv3, Dv5, etc.
You will typically see vCPU limits like “2/2 used”.
Request a quota increase for at least:
Standard Dv3 Family vCPUs
Standard Dv5 Family vCPUs
(These have much better regional availability.)
Alternatively, try a workspace in UK West or North Europe, where clusters often provision successfully.
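You can confirm both the quota and the regional-capacity side from the Azure CLI before touching the pipeline. A sketch using standard `az` commands (the region is from your setup; run against your own subscription):

```shell
# Per-family vCPU usage vs. limit in the region (the quota check).
az vm list-usage --location uksouth --output table

# All VM SKUs in the region, including restricted ones; an empty
# "Restrictions" column means the SKU can actually be provisioned.
az vm list-skus --location uksouth --resource-type virtualMachines --all --output table
```

If a SKU appears here with a `NotAvailableForSubscription` restriction, no Databricks-side setting will make it provision; pick a family that shows unrestricted.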