Machine Learning
Dive into the world of machine learning on the Databricks platform. Explore discussions on algorithms, model training, deployment, and more. Connect with ML enthusiasts and experts.

Facing issues with passing memory checkpointer in LangGraph agents

kishan_
New Contributor II

Hi,

I am trying to create a simple LangGraph agent in Databricks. The agent also uses a LangGraph memory checkpointer, which stores the state of the graph between invocations. This works fine when I run it in a Databricks notebook, but when I tried to log the agent as an MLflow model, I couldn't find any documentation on where to pass the memory checkpointer.

I want to know if there is a way to pass the memory checkpointer when logging a LangGraph agent in MLflow.
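
For context, the in-notebook pattern that works looks roughly like this (a minimal sketch; the State schema and agent node are illustrative placeholders, while MemorySaver and the thread_id config are standard LangGraph usage):

```
from typing import Annotated, TypedDict

from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages


class State(TypedDict):
    messages: Annotated[list, add_messages]  # chat history, appended to


def agent(state: State) -> dict:
    # Placeholder node; a real agent would call an LLM here.
    return {"messages": [("assistant", "hi")]}


builder = StateGraph(State)
builder.add_node("agent", agent)
builder.add_edge(START, "agent")
builder.add_edge("agent", END)

# Compiling with a checkpointer persists graph state per thread_id.
graph = builder.compile(checkpointer=MemorySaver())

config = {"configurable": {"thread_id": "demo-thread"}}
graph.invoke({"messages": [("user", "Hello!")]}, config)
```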

1 ACCEPTED SOLUTION


morenoj11
New Contributor III

I saw that you can compile the model without a checkpointer, register it in MLflow, and then, after loading, assign the checkpointer to the compiled graph.

```

import mlflow

# build_graph() comes from build_graph.py, which defines and compiles the graph.
mlflow.models.set_model(build_graph())

with mlflow.start_run():
    model_info = mlflow.langchain.log_model(
        lc_model="build_graph.py",  # Path to our model Python file
        artifact_path="langgraph",
    )
    model_uri = model_info.model_uri

[...]

# Assign the checkpointer to the loaded (compiled) graph after loading.
loaded_model = mlflow.langchain.load_model(model_uri)
loaded_model.checkpointer = checkpointer

loaded_model.invoke(input_state, config)  # config carries the thread_id

```

It's not elegant or future-proof, but it might do the trick while we wait for a better solution.
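
For reference, the lc_model="build_graph.py" argument relies on MLflow's models-from-code logging, where the logged file itself calls mlflow.models.set_model(). A minimal sketch of what such a file might contain (the State schema and agent node are illustrative assumptions, not code from this thread), compiled without a checkpointer as the workaround requires:

```
# build_graph.py (hypothetical contents)
from typing import Annotated, TypedDict

import mlflow
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages


class State(TypedDict):
    messages: Annotated[list, add_messages]


def agent(state: State) -> dict:
    # Placeholder node; a real agent would call an LLM here.
    return {"messages": [("assistant", "hi")]}


def build_graph():
    builder = StateGraph(State)
    builder.add_node("agent", agent)
    builder.add_edge(START, "agent")
    builder.add_edge("agent", END)
    # Deliberately compiled WITHOUT a checkpointer; it is assigned on the
    # loaded model after mlflow.langchain.load_model(), as shown above.
    return builder.compile()


# Models-from-code: MLflow serves whatever object is passed to set_model().
mlflow.models.set_model(build_graph())
```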


4 REPLIES

morenoj11
New Contributor III

Facing the same issue here.


kishan_
New Contributor II

@morenoj11 Have you tried deploying the solution you mentioned to Databricks Model Serving?

sebascardonal
New Contributor II

Hi all, I have the same issue. Were you able to deploy the graph with the MemorySaver?
