Machine Learning
Dive into the world of machine learning on the Databricks platform. Explore discussions on algorithms, model training, deployment, and more. Connect with ML enthusiasts and experts.

Migrated model to Unity catalog not seeing referenced serving endpoint

fbs342
New Contributor

A model was migrated from the workspace model registry to Unity Catalog. When the model was first created, dependencies on other Databricks serving endpoints were configured using the "DatabricksServiceEndpoint" config in MLflow. Note that this model, registered in the workspace model registry, was deployed on a serving endpoint called serving_a.

After migration to Unity Catalog, when I attempt to deploy the Unity Catalog model on the serving endpoint serving_a, I get an error that it cannot find the dependent serving endpoints that were set up when the model was created in the workspace model registry.

How can I solve this?

3 REPLIES

SP_6721
Honored Contributor

Hi @fbs342 ,

Try pointing MLflow to the Unity Catalog registry in your deploy code with mlflow.set_registry_uri("databricks-uc"), then update serving_a to reference the UC model before redeploying the endpoint. After that, move any dependencies you previously configured via DatabricksServiceEndpoint into serving endpoint environment variables, read those values in your model code, and redeploy serving_a.
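A minimal sketch of the env-var pattern described above. The variable names (EMBEDDING_ENDPOINT, RERANKER_ENDPOINT) are hypothetical; use whichever names you configure on serving_a:

```python
import os

def dependent_endpoints():
    """Resolve dependent serving-endpoint names from environment variables
    set on the serving endpoint, instead of values baked in at model-logging
    time. EMBEDDING_ENDPOINT / RERANKER_ENDPOINT are placeholder names."""
    return {
        "embedding": os.environ.get("EMBEDDING_ENDPOINT", ""),
        "reranker": os.environ.get("RERANKER_ENDPOINT", ""),
    }
```

Inside your pyfunc model you would call this from load_context and then query the resolved endpoints (for example via mlflow.deployments.get_deploy_client("databricks")), so the model no longer depends on names captured when it was originally logged.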

fbs342
New Contributor

The registry URI was already "databricks-uc" when the workspace model registry was used. When I update serving_a to reference the UC model it works; it's when I redeploy that the error I previously mentioned appears.


iyashk-DB
Databricks Employee

Workspace model registry worked with workspace-scoped serving endpoints. UC models and UC serving endpoints use metastore-wide semantics and different lookup rules. The saved path inside the model metadata still points to workspace-level endpoints that no longer exist in UC context. So when you deploy the migrated UC model to the same serving endpoint (serving_a), Databricks Serving tries to rehydrate these dependencies and fails.

The fix is to re-log the model with its Databricks resource dependencies declared, then re-register it in UC:

1. Verify that the dependent endpoints exist in the same workspace where you're deploying, and note their exact names.
2. Ensure the endpoint creator has the 'Can Query' permission on each dependent endpoint.
3. Re-log the model with MLflow, passing resources that point to each downstream endpoint, then register it to UC:

import mlflow
from mlflow.models.resources import DatabricksServingEndpoint

mlflow.set_registry_uri("databricks-uc")  # ensure UC is the registry

resources = [
    DatabricksServingEndpoint(endpoint_name="embedding_endpoint_name"),
    DatabricksServingEndpoint(endpoint_name="reranker_endpoint_name"),
    # ... add any other dependent endpoints
]

with mlflow.start_run():
    logged = mlflow.pyfunc.log_model(
        python_model="your_model.py",  # or your flavor-specific log call
        artifact_path="model",
        resources=resources,  # <-- critical for UC Serving
    )

uc_model_name = "catalog.schema.model_name"
registered = mlflow.register_model(logged.model_uri, uc_model_name)

This ensures UC Serving will automatically provision short‑lived credentials for those endpoints and validate that they exist and are accessible.

Update the serving endpoint (serving_a) to serve the new UC model version. If the endpoint identity (the creator) doesn't have the right UC or endpoint permissions, delete and recreate serving_a under a principal that does; the endpoint identity cannot be changed after creation.
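As an illustration of that last step, here is a sketch that builds the config payload for the serving endpoints REST API (PUT /api/2.0/serving-endpoints/{name}/config). The endpoint name, three-level model name, and version "2" are placeholder values from this thread, not verified names:

```python
def uc_served_entity_config(uc_model, version):
    """Build the served-entities payload that points a serving endpoint at a
    re-logged UC model version. uc_model must be the three-level UC name
    (catalog.schema.model_name); version is the registered model version."""
    return {
        "served_entities": [
            {
                "entity_name": uc_model,
                "entity_version": version,
                "workload_size": "Small",
                "scale_to_zero_enabled": True,
            }
        ]
    }

# Hypothetical values: replace with your own model name and version.
cfg = uc_served_entity_config("catalog.schema.model_name", "2")
```

You would send this payload to /api/2.0/serving-endpoints/serving_a/config (or pass the equivalent arguments to the Databricks SDK's serving_endpoints.update_config) to swap serving_a over to the new UC model version.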
