Hi, I've run into a problem serving a LangChain model that I just registered successfully on Databricks.
I used the following code to log the model and register it in Unity Catalog:
from mlflow.models import infer_signature
import mlflow
import langchain

mlflow.set_registry_uri("databricks-uc")
model_name = "model1"

with mlflow.start_run(run_name="clippy_rag") as run:
    # question, answer, chain, and get_retriver are defined earlier in the notebook
    signature = infer_signature(question, answer)

    # Log the chain and register it in Unity Catalog under model_name
    model_info = mlflow.langchain.log_model(
        chain,
        loader_fn=get_retriver,
        artifact_path="chain",
        registered_model_name=model_name,
        pip_requirements=[
            "mlflow==" + mlflow.__version__,
            "langchain==" + langchain.__version__,
            "databricks-vectorsearch",
        ],
        signature=signature,
    )
The UI shows that the model is ready, but when I served this model the endpoint failed with: "Model with name 'model1' and version '1' is not successfully registered. Ensure model version has finished registration before use in model serving." Do you know what the issue is here?
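In case it helps with diagnosis, this is roughly the check I plan to run to confirm the version's registration status before serving. It is only a minimal sketch using the standard MlflowClient API; the model name and version below are placeholders, not output from my run:

# Minimal sketch: confirm a registered model version has finished
# registration before using it in model serving.
# Assumes the standard MLflow client API; name/version are placeholders.
import mlflow
from mlflow.tracking import MlflowClient

mlflow.set_registry_uri("databricks-uc")
client = MlflowClient()

version = client.get_model_version(name="model1", version="1")
print(version.status)          # expect "READY" once registration completes
print(version.status_message)  # any failure detail surfaced by the registry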