Problem serving a langchain model on Databricks
11-08-2024 03:53 AM
Hi, I've run into a problem serving a LangChain model I just created on Databricks.
I used the following code to register the model in Unity Catalog:
from mlflow.models import infer_signature
import mlflow
import langchain

mlflow.set_registry_uri("databricks-uc")

model_name = "model1"

with mlflow.start_run(run_name="clippy_rag") as run:
    # `question` and `answer` are sample input/output defined earlier in the notebook
    signature = infer_signature(question, answer)
    model_info = mlflow.langchain.log_model(
        chain,  # the LangChain chain built earlier
        loader_fn=get_retriver,  # function that rebuilds the vector search retriever
        artifact_path="chain",
        registered_model_name=model_name,
        pip_requirements=[
            "mlflow==" + mlflow.__version__,
            "langchain==" + langchain.__version__,
            "databricks-vectorsearch",
        ],
        signature=signature,
    )
The UI shows the model as ready, but when I served the model it failed with: "Model with name 'model1' and version '1' is not successfully registered. Ensure model version has finished registration before use in model serving." Do you know what the issue is here?
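That error usually means the serving endpoint was created before the Unity Catalog model version finished registering. One generic safeguard is to poll the version's status and only create the endpoint once it reports READY. A minimal sketch, assuming a `get_status` callable you supply yourself (e.g. wrapping `MlflowClient(registry_uri="databricks-uc").get_model_version(name, version).status`; `wait_until_ready` is a hypothetical helper, not a Databricks API):

```python
import time

def wait_until_ready(get_status, timeout_s=300, poll_s=5):
    """Poll get_status() until the model version reports READY.

    get_status: zero-argument callable returning the MLflow model version
    status string (PENDING_REGISTRATION, FAILED_REGISTRATION, or READY).
    Raises RuntimeError on failed registration, TimeoutError on timeout.
    """
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        status = get_status()
        if status == "READY":
            return status
        if status.startswith("FAILED"):
            raise RuntimeError(f"Model version registration failed: {status}")
        time.sleep(poll_s)
    raise TimeoutError("Model version did not become READY before the timeout")
```

With that in place, the serving endpoint would only be created after `wait_until_ready(...)` returns.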
Labels:
- LLMs
- Model Serving
1 REPLY
11-08-2024 04:24 AM
I suspect the issue comes from this small error I got: "Got error: Must specify a chain Type in config." I used chain_type="stuff" when building the chain, but I'm not sure how to fix it.
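For context, that error string comes from LangChain's chain deserialization: when MLflow reloads the logged chain from its serialized config, the loader requires a "_type" key naming the chain type (e.g. "stuff") and raises if it is missing. A rough pure-Python illustration of that check (sketch only, not the actual library code; the real loader then dispatches to a chain-specific builder):

```python
def load_chain_from_config(config: dict):
    """Illustrative stand-in for LangChain's chain loader.

    The loader pops the "_type" key from the serialized config and raises
    when it is absent -- producing the error quoted above.
    """
    config = dict(config)  # avoid mutating the caller's dict
    chain_type = config.pop("_type", None)
    if chain_type is None:
        raise ValueError("Must specify a chain Type in config.")
    return chain_type, config
```

So the logged chain's serialized config apparently lacks the chain type. Whether the fix is re-logging the chain so that chain_type="stuff" ends up in the serialized config depends on how `chain` was constructed, which the post doesn't show.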

