Workspace model registry worked with workspace-scoped serving endpoints. UC models and UC serving endpoints use metastore-wide semantics and different lookup rules. The saved path inside the model metadata still points to workspace-level endpoints that no longer exist in UC context. So when you deploy the migrated UC model to the same serving endpoint (serving_a), Databricks Serving tries to rehydrate these dependencies and fails.
The fix is to re-log the model with its Databricks resource dependencies declared, then re-register it in UC.
Verify that the dependent endpoints exist in the same workspace where you're deploying and note their exact names. Ensure the endpoint creator has the 'Can Query' permission on each dependent endpoint. Re-log the model with MLflow, passing `resources` entries that point to each downstream endpoint, then register it to UC:
```python
import mlflow
from mlflow.models.resources import DatabricksServingEndpoint

mlflow.set_registry_uri("databricks-uc")  # ensure UC is the registry

resources = [
    DatabricksServingEndpoint(endpoint_name="embedding_endpoint_name"),
    DatabricksServingEndpoint(endpoint_name="reranker_endpoint_name"),
    # ... add any other dependent endpoints
]

with mlflow.start_run():
    logged = mlflow.pyfunc.log_model(
        python_model="your_model.py",  # or your flavor-specific log call
        artifact_path="model",
        resources=resources,  # <-- critical for UC Serving
    )

uc_model_name = "catalog.schema.model_name"
registered = mlflow.register_model(logged.model_uri, uc_model_name)
```
This ensures UC Serving will automatically provision short-lived credentials for those endpoints and validate that they exist and are accessible.
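The 'Can Query' grant mentioned above can be scripted rather than clicked through. A minimal sketch against the Databricks Permissions REST API (`PATCH /api/2.0/permissions/serving-endpoints/{id}`), using only the standard library; the host, token, endpoint id, and service principal name are placeholders you must supply:

```python
import json
import urllib.request


def can_query_acl(principal: str) -> dict:
    """Build the ACL payload granting CAN_QUERY to a service principal."""
    return {
        "access_control_list": [
            {"service_principal_name": principal, "permission_level": "CAN_QUERY"}
        ]
    }


def grant_can_query(host: str, token: str, endpoint_id: str, principal: str) -> int:
    # PATCH merges the new ACL entry with the endpoint's existing permissions.
    req = urllib.request.Request(
        url=f"{host}/api/2.0/permissions/serving-endpoints/{endpoint_id}",
        data=json.dumps(can_query_acl(principal)).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="PATCH",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status


# Usage (placeholders): grant_can_query("https://<workspace-host>", "<token>",
#                                       "<endpoint-id>", "<sp-application-id>")
```

Grant this to the principal that creates or owns serving_a, for every dependent endpoint.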
Update the serving endpoint (serving_a) to serve the new UC model version. If the endpoint identity (the creator) doesn't have the right UC or endpoint permissions, delete and recreate serving_a under a principal that does, since the endpoint identity cannot be changed after creation.
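The update step can be sketched as a call to the serving endpoints REST API (`PUT /api/2.0/serving-endpoints/{name}/config`), again with standard-library HTTP; the host, token, workload size, and version number below are placeholder assumptions, not values from your workspace:

```python
import json
import urllib.request


def uc_model_config(model_name: str, version: str) -> dict:
    """Config payload pointing the endpoint at a new UC model version."""
    return {
        "served_entities": [
            {
                "entity_name": model_name,  # e.g. "catalog.schema.model_name"
                "entity_version": version,
                "workload_size": "Small",          # assumed; size to your load
                "scale_to_zero_enabled": True,     # assumed; adjust as needed
            }
        ]
    }


def update_endpoint(host: str, token: str, endpoint: str, config: dict) -> int:
    # PUT replaces the endpoint's served-model config and triggers a rollout.
    req = urllib.request.Request(
        url=f"{host}/api/2.0/serving-endpoints/{endpoint}/config",
        data=json.dumps(config).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status


# Usage (placeholders): update_endpoint("https://<workspace-host>", "<token>",
#                        "serving_a", uc_model_config("catalog.schema.model_name", "1"))
```

The same config dict can equivalently be passed through the Databricks SDK or the serving UI; the key point is that `entity_name` is now the three-level UC name, not the workspace registry name.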