Possible to use `params` argument of `mlflow.pyfunc.PythonModel` deployed to Databricks endpoint?
09-11-2024 11:11 AM - edited 09-11-2024 11:22 AM
I'm deploying a custom model using the `mlflow.pyfunc.PythonModel` class, as described in the MLflow documentation. My model uses the `params` argument of the `predict` method so that callers can control certain aspects of the model at inference time. For example:
import mlflow


class CustomModelWrapper(mlflow.pyfunc.PythonModel):
    def __init__(self, model):
        self.model = model

    def predict(self, context, model_input, params=None):
        # params is None when the caller passes nothing, so guard before .get()
        params = params or {}
        confidence_level = params.get("confidence_level", 0.9)
        return self.model.predict(model_input, confidence_level=confidence_level)
The model works fine when I load it from the MLflow Model Registry; passing in `params` works as expected.
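For reference, this is roughly how I call it after loading from the registry (the model name, version, and input are placeholders). Note that MLflow only forwards `params` that are declared in the model's signature, so the model has to be logged with a signature that includes them:

import mlflow
import pandas as pd

# Placeholder model URI and input; adjust to your registered model.
loaded = mlflow.pyfunc.load_model("models:/my_custom_model/1")
input_df = pd.DataFrame({"x": [1.0, 2.0]})

# params is passed through to CustomModelWrapper.predict
preds = loaded.predict(input_df, params={"confidence_level": 0.95})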
But it does not work when I deploy the model to a Databricks model serving endpoint; the `params` argument does not seem to be supported there. That's unfortunate, because it would be useful, even important, for many use cases.
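For context, this is a sketch of how I'd expect to pass `params` to the endpoint, following the MLflow scoring protocol (the workspace URL, endpoint name, and token are placeholders); the top-level "params" field appears to have no effect:

import requests

# Placeholders: fill in your workspace URL, endpoint name, and token.
url = "https://<workspace-url>/serving-endpoints/my-endpoint/invocations"
headers = {
    "Authorization": "Bearer <token>",
    "Content-Type": "application/json",
}
payload = {
    "dataframe_split": {"columns": ["x"], "data": [[1.0], [2.0]]},
    "params": {"confidence_level": 0.95},
}
response = requests.post(url, headers=headers, json=payload)
print(response.json())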
Does Databricks have any plans to support the `params` argument in model serving endpoints in the future? In the meantime, is there any reasonable workaround to allow the user to specify parameters at inference time?
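In case it helps others, here is a minimal sketch of one workaround I've been considering: encoding the parameters as extra columns of `model_input` and stripping them off inside `predict`. This assumes `model_input` is a pandas DataFrame, and the class name is illustrative:

import mlflow


class ParamsInInputWrapper(mlflow.pyfunc.PythonModel):
    def __init__(self, model):
        self.model = model

    def predict(self, context, model_input, params=None):
        confidence_level = 0.9
        if params:
            # Normal path: the params dict is forwarded (e.g. local predict).
            confidence_level = params.get("confidence_level", confidence_level)
        elif "confidence_level" in model_input.columns:
            # Fallback path: the serving layer dropped params, so read the
            # value from a dedicated input column and remove it before scoring.
            confidence_level = float(model_input["confidence_level"].iloc[0])
            model_input = model_input.drop(columns=["confidence_level"])
        return self.model.predict(model_input, confidence_level=confidence_level)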
Labels: Model Serving
01-27-2025 07:30 PM
Is there any update on this feature?
01-28-2025 07:05 AM
No updates that I'm aware of.

