Machine Learning
Dive into the world of machine learning on the Databricks platform. Explore discussions on algorithms, model training, deployment, and more. Connect with ML enthusiasts and experts.

Problem loading a pyfunc model in job run

AmineM
New Contributor II

Hi, I'm currently working on an automated job to predict forecasts using a notebook that works just fine when I run it manually but keeps failing when scheduled. Here is my code:

import mlflow

# Run URI of the logged model (value taken from the error traceback below)
logged_model = 'runs:/f715739d09624676b443cb02e7c98cc0/model'

# Load model as a PyFuncModel.
loaded_model = mlflow.pyfunc.load_model(logged_model)
# Predict using the model (predict_df is built in an earlier cell)
results_df = loaded_model.predict(predict_df)

# Define group_column, time_column and target_column
group_column = "id"  # Replace with your actual group column name
time_column = "week_date_format"  # Replace with your actual time column name
target_column = "sales_value"

# Keep only the forecast rows: the last forecast_horizon predictions per id
# (forecast_horizon is defined in an earlier cell)
final_df = results_df.reset_index()[[group_column, time_column, "yhat"]].tail(
    forecast_horizon * predict_df[group_column].nunique()
)
final_df = final_df.rename(columns={"yhat": target_column})
display(final_df)

FYI, the other cells, where mlflow and the model dependencies are installed, work fine.
PS: I use serverless job compute.

1 ACCEPTED SOLUTION

sarahbhord
Databricks Employee

Hey AmineM!

If your MLflow model loads fine in a Databricks notebook but fails in a scheduled job on serverless compute with an error like:
 
TypeError: code() argument 13 must be str, not int
 
the root cause is almost always a mismatch between the Python version (or dependencies like cloudpickle) used when the model was logged and the version used by your job cluster. This is especially common if you train your model on one Databricks Runtime (say, Python 3.8) and run your scheduled job on another (like Python 3.11), or across different serverless vs. interactive environments.

How to fix it: 

  • Make sure your scheduled job runs on a compute/runtime with the same Python and package versions as where you trained/logged the model.
  • If you can't control the job compute's environment (sometimes the case on serverless), re-log the model from a job running on that same compute type and use this new artifact for predictions.
  • Optionally, check your model's requirements.txt/conda.yaml for dependency mismatches (especially cloudpickle).
  • Using Model Serving works because it auto-aligns dependencies, but it does cost more. Best practice for batch is to avoid it if possible.
This is a known serialization problem; matching environments is the robust solution! I hope this is helpful.
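
As a minimal sketch of the first bullet (the logged version below is a hypothetical placeholder; in practice you would read it from the python_env.yaml or conda.yaml artifact stored with the model run), you can compare the model's Python version against the job's runtime before calling load_model:

```python
import sys

# Hypothetical placeholder - in practice, read this value from the
# model's python_env.yaml (or conda.yaml) artifact in the MLflow run.
logged_python = "3.8.10"

# Python version of the compute actually running this job
runtime_python = "{}.{}.{}".format(*sys.version_info[:3])

# cloudpickle payloads are generally not portable across minor Python
# versions, so compare up to major.minor.
logged_minor = tuple(logged_python.split(".")[:2])
runtime_minor = tuple(runtime_python.split(".")[:2])

if logged_minor != runtime_minor:
    print(f"Python mismatch: model logged on {logged_python}, "
          f"job runs on {runtime_python} - expect cloudpickle errors.")
```

Running this at the top of the scheduled job makes the mismatch visible before the opaque `code() argument 13` error can occur.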
 
Best,
 
Sarah


3 REPLIES

AmineM
New Contributor II

And here is the error that I get:

TypeError: code() argument 13 must be str, not int
File <command-32974490616971>, line 16
     13 logged_model = 'runs:/f715739d09624676b443cb02e7c98cc0/model'
     15 # Load model as a PyFuncModel.
---> 16 loaded_model = mlflow.pyfunc.load_model(logged_model)
     17 # Predict using the model
     18 results_df = loaded_model.predict(predict_df)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-222c73bc-a540-4c26-aa9b-af028baf9eca/lib/python3.11/site-packages/mlflow/pyfunc/model.py:659, in _load_context_model_and_signature(model_path, model_config)
    657         raise MlflowException("Python model path was not specified in the model configuration")
    658     with open(os.path.join(model_path, python_model_subpath), "rb") as f:
--> 659         python_model = cloudpickle.load(f)
    661 artifacts = {}
    662 for saved_artifact_name, saved_artifact_info in pyfunc_config.get(
    663     CONFIG_KEY_ARTIFACTS, {}
    664 ).items():

AmineM
New Contributor II

Found a temporary solution: use a serving endpoint, but it increases costs.
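
For anyone taking the same route, a minimal sketch of the request payload a Databricks serving endpoint expects (column names are taken from the notebook above; the workspace URL and endpoint name are placeholders you would fill in):

```python
import pandas as pd

# Build the JSON payload in the "dataframe_split" format accepted by
# Databricks Model Serving (columns match the notebook above).
predict_df = pd.DataFrame(
    {"id": [1, 2], "week_date_format": ["2024-01-06", "2024-01-06"]}
)
payload = {"dataframe_split": predict_df.to_dict(orient="split")}

# POST `payload` as JSON to
#   https://<workspace-url>/serving-endpoints/<endpoint-name>/invocations
# with an "Authorization: Bearer <token>" header, e.g. via requests.post.
print(payload["dataframe_split"]["columns"])  # -> ['id', 'week_date_format']
```

Note that this moves the dependency-alignment problem to the endpoint (which resolves the model's logged environment itself), which is exactly why it works but costs more than batch scoring.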

