
Problem serving a langchain model on Databricks

hawa
New Contributor II

Hi, I've encountered a problem serving a langchain model I just created successfully on Databricks.

I was using the following code to set up a model in Unity Catalog:

from mlflow.models import infer_signature
import mlflow
import langchain

mlflow.set_registry_uri("databricks-uc")
model_name = "model1"

with mlflow.start_run(run_name="clippy_rag") as run:
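    # `question`, `answer`, `chain`, and `get_retriver` were defined earlier (not shown)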
    signature = infer_signature(question, answer)
    model_info = mlflow.langchain.log_model(
        chain,
        loader_fn=get_retriver,
        artifact_path="chain",
        registered_model_name=model_name,
        pip_requirements=[
            "mlflow==" + mlflow.__version__,
            "langchain==" + langchain.__version__,
            "databricks-vectorsearch",
        ],
        signature=signature,
    )
 
The UI shows that the model is ready, but when I served this model it showed: "Model with name 'model1' and version '1' is not successfully registered. Ensure model version has finished registration before use in model serving." Do you know what the issue is here?
5 REPLIES

hawa
New Contributor II

I suspect the issue is coming from this small error I got: "Got error: Must specify a chain Type in config." I used chain_type="stuff" when building the langchain, but I'm not sure how to fix it.

rkmee
New Contributor II

Hi, were you able to solve it? I'm having the same issue.

constanmrtnz
New Contributor II

Hi! Is there any news on this? I'm getting the same error 😕

Octavian1
Contributor

Hi,

The warnings/errors in the logs of the langchain model logging process can give you a good hint, although it may not be that evident at first sight.

Something similar happened to me: same error message, and the cause was an OpenAI model that I had mistakenly passed to the langchain model as if it were a Databricks one.

Louis_Frolio
Databricks Employee

Greetings @hawa, thanks for sharing the details. This looks like a combination of registration and configuration issues that commonly surface with the MLflow LangChain flavor on Databricks.

What’s going wrong

  • The registered model name should be a full three-level Unity Catalog path like <catalog>.<schema>.<model>. Using just "model1" causes registration/serving mismatches and can lead to “not successfully registered” errors when serving from UC.
  • The LangChain flavor needs chain type info in the logged model’s config so it can reconstruct the chain at load/serve time. Without it, you get “Must specify a chain Type in config.” The fix is to pass model_config={"chain_type": "stuff"} (or whatever you used) when calling mlflow.langchain.log_model(...) so the MLflow artifact contains the chain’s type for serving.
  • It’s best to validate the model before serving by loading the model back and invoking it (or using mlflow.models.predict) to ensure the runtime and signature behave as expected.

Fix: log and register correctly, then validate

Below is a minimal pattern that addresses all three points.
from mlflow.models import infer_signature
import mlflow
import langchain

# 1) Use a full UC name
CATALOG = "prod"
SCHEMA = "ai_apps"
MODEL_BASENAME = "model1"
REGISTERED_MODEL_NAME = f"{CATALOG}.{SCHEMA}.{MODEL_BASENAME}"

mlflow.set_registry_uri("databricks-uc")

# Assume you already built `chain` (with your chain_type="stuff") and have a loader_fn (e.g., get_retriver)
question = {"query": "Hello"}  # keep your input schema consistent with how the chain expects inputs
answer = chain.invoke(question)
signature = infer_signature(question, answer)

with mlflow.start_run(run_name="clippy_rag") as run:
    model_info = mlflow.langchain.log_model(
        chain,
        loader_fn=get_retriver,                     # your retriever factory
        artifact_path="chain",
        registered_model_name=REGISTERED_MODEL_NAME,
        # 2) Persist chain type so serving can reconstruct it
        model_config={"chain_type": "stuff"},
        # Pin requirements needed at serve time
        pip_requirements=[
            f"mlflow=={mlflow.__version__}",
            f"langchain=={langchain.__version__}",
            "databricks-vectorsearch",
        ],
        # 3) Keep non-DataFrame example intact for proper signature inference
        input_example=question,
        example_no_conversion=True,
        signature=signature,
    )

# Optional: quick pre-deployment validation
loaded = mlflow.langchain.load_model(model_info.model_uri)
_ = loaded.invoke(question)  # should run without errors

Why this works

  • The full Unity Catalog path ensures the version is created under UC and can be targeted by Model Serving without cross-registry confusion.
  • Providing model_config={"chain_type": "stuff"} writes the chain type into the MLflow LangChain flavor’s config (steps YAML), satisfying LangChain’s loader which otherwise throws “Must specify a chain Type in config.”
  • Doing a quick self-load/invoke avoids surprises at serving time and aligns with Databricks’ guidance to validate models pre-deployment.
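
As an aside, if you want the validation step to run in an isolated environment rather than in-process (the mlflow.models.predict route mentioned above), a minimal sketch, assuming a recent MLflow version and the model_info from the example:

import mlflow

# Rebuilds the environment from the logged pip_requirements, so missing or
# mispinned dependencies surface here rather than at endpoint startup
mlflow.models.predict(
    model_uri=model_info.model_uri,
    input_data={"query": "Hello"},
    env_manager="virtualenv",
)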

Then serve it

You can now create a custom model serving endpoint from the UI (Serving > Create endpoint), selecting your UC model by its full name and version. The endpoint should transition to READY once the container image is built and the model is loaded.
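
If you'd rather create the endpoint programmatically than through the UI, a sketch along these lines should work (the endpoint name is a placeholder; the entity name reuses the UC variables from the logging example):

from mlflow.deployments import get_deploy_client

deploy_client = get_deploy_client("databricks")

# Endpoint name is arbitrary; entity_name must be the full three-level UC path
endpoint = deploy_client.create_endpoint(
    name="clippy-rag-endpoint",  # placeholder name
    config={
        "served_entities": [
            {
                "entity_name": f"{CATALOG}.{SCHEMA}.{MODEL_BASENAME}",
                "entity_version": "1",
                "workload_size": "Small",
                "scale_to_zero_enabled": True,
            }
        ]
    },
)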
 

Extra tips

  • If your endpoint shows “Not Ready” for an extended period, confirm the model version status in UC (READY vs. PENDING; see the polling sketch after this list) and that the endpoint creator’s identity has UC access to the catalog/schema/model. If the creator’s permissions are wrong, delete the endpoint and recreate it under a principal with the correct UC privileges.
  • When logging nonstandard dependencies (private wheels or pinned versions), prefer logging them with the model (via pip_requirements, extra_pip_requirements, or conda_env) to ensure the serving container matches your training env.
  • If you want Databricks-managed authentication to resources (Vector Search, foundation model endpoints), consider the resources mechanism described in the agent logging docs; for simple retrievers your loader_fn is fine, but resources help with auth passthrough in production.
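
Here is the status check from the first tip as a minimal sketch (assumes the CATALOG/SCHEMA/MODEL_BASENAME variables from the logging example; the timeout is arbitrary):

import time
from mlflow import MlflowClient

uc_client = MlflowClient(registry_uri="databricks-uc")

def wait_until_ready(name: str, version: str, timeout_s: int = 300):
    # Poll until the UC model version leaves PENDING_REGISTRATION
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        mv = uc_client.get_model_version(name, version)
        if mv.status == "READY":
            return mv
        time.sleep(5)
    raise TimeoutError(f"{name} v{version} did not reach READY in {timeout_s}s")

wait_until_ready(f"{CATALOG}.{SCHEMA}.{MODEL_BASENAME}", "1")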
 
Cheers, Louis.