11-08-2024 03:53 AM
Hi, I've run into a problem serving a LangChain model that I had just created successfully on Databricks.
I was using the following code to set up a model in Unity Catalog:
11-08-2024 04:24 AM
I suspect the issue comes from this small error I got: "Got error: Must specify a chain Type in config". I used the
03-13-2025 05:33 PM
Hi, were you able to solve it? I'm having the same issue
04-07-2025 06:44 AM
Hi! Are there any news about this? I'm getting the same error 😕
04-23-2025 10:27 AM
Hi,
The warnings and errors in the logs of the LangChain model-logging process can give you a good hint, although the cause may not be evident at first sight.
Something similar happened to me - same error message - and the cause was an OpenAI model that I had mistakenly passed to the LangChain model as if it were a Databricks one.
yesterday
Greetings @hawa, thanks for sharing the details. This looks like a combination of registration and configuration issues that commonly surface with the MLflow LangChain flavor on Databricks.
A few things to check:

1. Register the model under its full three-level Unity Catalog name, `<catalog>.<schema>.<model>`. Using just "model1" causes registration/serving mismatches and can lead to "not successfully registered" errors when serving from UC.
2. Pass `model_config={"chain_type": "stuff"}` (or whichever chain type you used) when calling `mlflow.langchain.log_model(...)`, so the MLflow artifact records the chain's type for serving.
3. Validate before deployment (e.g., with `mlflow.models.predict`) to ensure the runtime and signature behave as expected.

```python
from mlflow.models import infer_signature
import mlflow
import langchain

# 1) Use a full UC name
CATALOG = "prod"
SCHEMA = "ai_apps"
MODEL_BASENAME = "model1"
REGISTERED_MODEL_NAME = f"{CATALOG}.{SCHEMA}.{MODEL_BASENAME}"

mlflow.set_registry_uri("databricks-uc")

# Assume you already built `chain` (with your chain_type="stuff")
# and have a loader_fn (e.g., get_retriver)
question = {"query": "Hello"}  # keep your input schema consistent with how the chain expects inputs
answer = chain.invoke(question)
signature = infer_signature(question, answer)

with mlflow.start_run(run_name="clippy_rag") as run:
    model_info = mlflow.langchain.log_model(
        chain,
        loader_fn=get_retriver,  # your retriever factory
        artifact_path="chain",
        registered_model_name=REGISTERED_MODEL_NAME,
        # 2) Persist chain type so serving can reconstruct it
        model_config={"chain_type": "stuff"},
        # Pin requirements needed at serve time
        pip_requirements=[
            f"mlflow=={mlflow.__version__}",
            f"langchain=={langchain.__version__}",
            "databricks-vectorsearch",
        ],
        # 3) Keep non-DataFrame example intact for proper signature inference
        input_example=question,
        example_no_conversion=True,
        signature=signature,
    )

# Optional: quick pre-deployment validation
loaded = mlflow.langchain.load_model(model_info.model_uri)
_ = loaded.invoke(question)  # should run without errors
```
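As a quick sanity check before registering, you can verify that the name is in the three-level `catalog.schema.model` form. A minimal sketch (this helper is hypothetical, not part of MLflow or Databricks APIs):

```python
import re

# Hypothetical helper: checks that a registered-model name uses the
# three-level Unity Catalog form <catalog>.<schema>.<model>.
UC_NAME_PATTERN = re.compile(r"^\w+\.\w+\.\w+$")

def is_three_level_uc_name(name: str) -> bool:
    """Return True if `name` looks like catalog.schema.model."""
    return bool(UC_NAME_PATTERN.match(name))

print(is_three_level_uc_name("prod.ai_apps.model1"))  # True
print(is_three_level_uc_name("model1"))               # False
```

Failing fast on a one-level name like "model1" avoids the confusing registration/serving mismatch later.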
Why this works:

- `model_config={"chain_type": "stuff"}` writes the chain type into the MLflow LangChain flavor's config (the steps YAML), satisfying LangChain's loader, which otherwise throws "Must specify a chain Type in config."
- Pinning dependencies (`pip_requirements`, `extra_pip_requirements`, or `conda_env`) ensures the serving container matches your training env.
- `loader_fn` is fine, but resources help with auth passthrough in production.
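To illustrate why the error appears only at load/serve time: the loader reads the persisted config and refuses to reconstruct a chain whose type is missing. This is an illustrative sketch of that kind of check, not LangChain's actual loader code:

```python
# Illustrative only: mimics the kind of validation that produces
# "Must specify a chain Type in config" when the persisted config
# lacks a chain-type entry.
def load_chain_from_config(config: dict) -> str:
    if "chain_type" not in config:
        raise ValueError("Must specify a chain Type in config")
    return config["chain_type"]

print(load_chain_from_config({"chain_type": "stuff"}))  # stuff
```

So logging the model without the chain type succeeds, and the failure only surfaces when the serving endpoint tries to rebuild the chain from the artifact.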