Administration & Architecture

Model serving with provisioned throughput fails

PNC
New Contributor II

I'm trying to serve a model with provisioned throughput but I'm getting this error:

Build could not start due to an internal error. If you are serving a model from UC and Azure storage firewall or Private Link is configured on your storage account, please verify your network connectivity is configured properly according to https://learn.microsoft.com/en-us/azure/databricks/security/network/serverless-network-security/serv.... For additional assistance, contact Databricks Support.

Strange thing is that I had the same issue previously when trying to serve a model with custom compute. Then I created a private endpoint as instructed and it started working. Now I'm creating a serving endpoint for a model in the same storage account and in the same workspace. The private link is already created. The only difference is that this model uses provisioned throughput while the other was using custom compute. Could that cause this error, and if yes, how do I fix it?

2 REPLIES

Louis_Frolio
Databricks Employee

Hey @PNC,

Thanks for sharing the error and the surrounding context—this one is a very common networking and Private Link gotcha when serving models on serverless. You’re definitely not alone here.

Here’s what’s really going on under the hood.

Why provisioned throughput can trigger this

Provisioned throughput endpoints run on serverless and must pull model artifacts from Unity Catalog over Azure Storage using the Blob endpoint. That path specifically requires a Private Endpoint for the storage account’s blob subresource. The DFS subresource is only needed for logging models from serverless notebooks—not for serving.

Private connectivity is supported for provisioned throughput and custom model serving, but external models are a different story: private connectivity typically isn't supported for them by default.

In practice, this means PT builds will fail if your NCC or Private Link setup does not include an approved, established blob Private Endpoint to the exact storage account where your UC model artifacts live—even if other serverless workloads have worked fine in the past.

What’s different from your earlier custom model success

Both custom model endpoints and provisioned throughput use serverless networking and respect workspace ingress controls like IP ACLs and Private Link. The key difference is that the serving build process itself explicitly pulls artifacts via the Blob endpoint. So if you only have a DFS Private Endpoint—which is very common because it’s required for UC logging from notebooks—that alone is not sufficient for serving builds.

Provisioned throughput also allocates dedicated inference capacity, but it still relies on the same serverless connectivity patterns. The dependency on Blob access does not change.

Fastest fix checklist

First, confirm that your NCC is actually attached to the workspace. After attaching, give it about 10 minutes and then restart all serverless services—endpoints, serverless notebooks, the works.
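If you'd rather script that check than click through the account console, here's a minimal sketch against the Azure Databricks Account API. The host, IDs, token, and the network_connectivity_config_id field in the response are placeholders and assumptions to verify against the current API docs, not a confirmed recipe:

# Hedged sketch: check whether an NCC is attached to the workspace.
# Account ID, workspace ID, and token are placeholders you supply; the
# response field name (network_connectivity_config_id) is an assumption.
import requests

ACCOUNT_HOST = "https://accounts.azuredatabricks.net"
ACCOUNT_ID = "<your-account-id>"
WORKSPACE_ID = "<your-workspace-id>"
TOKEN = "<account-admin-token>"

resp = requests.get(
    f"{ACCOUNT_HOST}/api/2.0/accounts/{ACCOUNT_ID}/workspaces/{WORKSPACE_ID}",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
ncc_id = resp.json().get("network_connectivity_config_id")
print("NCC attached:", ncc_id or "NONE - attach one before retrying the build")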

Next, add or verify Private Endpoint rules on the Unity Catalog storage account:

You must have one rule pointing at the storage account resource ID with the Azure subresource set to blob; this is mandatory for model artifact downloads during serving. (A minimal API sketch follows this list.)

DFS is optional for serving and only required for serverless notebook logging.

Make sure every rule shows a status of ESTABLISHED, meaning it’s approved on the Azure side.
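To make the rule check and the blob rule creation concrete, here's a rough sketch against the Network Connectivity Configurations API. The endpoint paths, payload fields (resource_id, group_id), and response shape are my assumptions about that API; confirm them against the current docs before relying on this:

# Hedged sketch: list the NCC's private-endpoint rules and add a blob rule
# if one is missing. Remember to approve the new PE on the Azure side so it
# moves to ESTABLISHED.
import requests

ACCOUNT_HOST = "https://accounts.azuredatabricks.net"
ACCOUNT_ID = "<your-account-id>"
NCC_ID = "<your-ncc-id>"
TOKEN = "<account-admin-token>"
STORAGE_RESOURCE_ID = (
    "/subscriptions/<sub>/resourceGroups/<rg>"
    "/providers/Microsoft.Storage/storageAccounts/<uc-storage-account>"
)

headers = {"Authorization": f"Bearer {TOKEN}"}
base = f"{ACCOUNT_HOST}/api/2.0/accounts/{ACCOUNT_ID}/network-connectivity-configs/{NCC_ID}"

# Inspect what's already there (look for a rule with group_id "blob" in ESTABLISHED state).
existing = requests.get(base, headers=headers)
existing.raise_for_status()
print(existing.json())

# Add the blob private-endpoint rule pointing at the UC storage account.
new_rule = requests.post(
    f"{base}/private-endpoint-rules",
    headers=headers,
    json={"resource_id": STORAGE_RESOURCE_ID, "group_id": "blob"},
)
new_rule.raise_for_status()
print(new_rule.json())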

If instead you’re using the firewall approach rather than Private Link, make sure you followed the NCC firewall enablement docs exactly and allowed the stable serverless subnet IDs for in-region storage. If not, Private Link is the cleaner and more reliable path.

After making any NCC or PE changes, restart your serverless resources again and retry the build.

Before retrying, it’s worth doing a quick connectivity test from serverless:

From a serverless notebook, confirm that your storage FQDNs resolve to private IPs for the Blob endpoint.

Then curl a small test object using a SAS token. If name resolution and the GET both work, your Private Link or firewall path is solid; a minimal notebook sketch of both checks follows.
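Here's what those two checks can look like from a serverless notebook. The storage account name, container, blob path, and SAS token are placeholders you'd fill in; treat this as a sketch, not a definitive test harness:

# Hedged sketch: (1) the blob FQDN should resolve to a private IP over
# Private Link, and (2) a GET with a SAS token should return the test object.
import ipaddress
import socket

import requests

STORAGE_ACCOUNT = "<uc-storage-account>"
BLOB_FQDN = f"{STORAGE_ACCOUNT}.blob.core.windows.net"
TEST_URL = f"https://{BLOB_FQDN}/<container>/<small-test-blob>?<sas-token>"

# 1. Name resolution: a working Private Link path hands back an RFC 1918 address.
ip = socket.gethostbyname(BLOB_FQDN)
is_private = ipaddress.ip_address(ip).is_private
print(BLOB_FQDN, "->", ip, "(private)" if is_private else "(PUBLIC - PE not in play)")

# 2. Data-plane GET: anything other than 200 here points at firewall/PE config.
resp = requests.get(TEST_URL, timeout=30)
print("GET status:", resp.status_code)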

If the build still fails after all that, download the serving build logs and look specifically for AuthorizationFailure or storage access errors. When you see those, it almost always points back to either a missing blob PE or a rule that never moved to ESTABLISHED.
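If you want to automate that log check, a rough sketch along these lines can help. I'm assuming the serving endpoints build-logs REST path and a JSON "logs" field here; the endpoint and served model names are placeholders, so double-check against the Serving Endpoints API docs for your workspace:

# Hedged sketch: pull serving build logs via the REST API and scan for the
# usual storage-access failure markers.
import requests

WORKSPACE_HOST = "https://<your-workspace>.azuredatabricks.net"
TOKEN = "<pat-or-oauth-token>"
ENDPOINT = "<serving-endpoint-name>"
SERVED_MODEL = "<served-model-name>"

resp = requests.get(
    f"{WORKSPACE_HOST}/api/2.0/serving-endpoints/{ENDPOINT}"
    f"/served-models/{SERVED_MODEL}/build-logs",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
logs = resp.json().get("logs", "")

for marker in ("AuthorizationFailure", "403", "Connection timed out"):
    if marker in logs:
        print(f"Found '{marker}' - likely a missing blob PE or a rule not ESTABLISHED")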

TL;DR

Yes—provisioned throughput will absolutely surface this when the workspace NCC does not include an approved, established blob Private Endpoint to the storage account that hosts your UC model artifacts. Add or verify the blob PE (DFS only if you need notebook logging), attach the NCC, approve everything so it reads ESTABLISHED, restart all serverless resources, and retry.

 

Hope this helps, Louis.

iyashk-DB
Databricks Employee

Hi team,

Model serving endpoints run on serverless compute, so you need to update the storage account's firewall to allow Databricks serverless compute via your workspace's Network Connectivity Configuration (NCC). If the storage account firewall is enabled and the serverless subnets aren't allowed, endpoint startup fails with this error.

Reference doc for setting up the NCC firewall rules for serverless: https://docs.databricks.com/aws/en/security/network/serverless-network-security/serverless-firewall
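If you go the firewall route, the NCC's default rules are where the stable serverless subnet IDs live. Here's a rough sketch of pulling them via the API so you know what to allow on the storage account firewall; the response shape (egress_config -> default_rules -> azure_service_endpoint_rule -> subnets) is my assumption, so verify it against the doc linked above:

# Hedged sketch: read the NCC default rules to find the serverless subnets
# to allow on the storage account firewall.
import requests

ACCOUNT_HOST = "https://accounts.azuredatabricks.net"
ACCOUNT_ID = "<your-account-id>"
NCC_ID = "<your-ncc-id>"
TOKEN = "<account-admin-token>"

resp = requests.get(
    f"{ACCOUNT_HOST}/api/2.0/accounts/{ACCOUNT_ID}/network-connectivity-configs/{NCC_ID}",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
default_rules = resp.json().get("egress_config", {}).get("default_rules", {})
subnets = default_rules.get("azure_service_endpoint_rule", {}).get("subnets", [])

print("Allow these subnets on the storage account firewall:")
for subnet in subnets:
    print(" -", subnet)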