I'm deploying a custom model using the `mlflow.pyfunc.PythonModel` class as described here. My model uses the `params` argument in the `predict` method to allow the user to choose some aspects of the model at inference time. For example:
```python
import mlflow


class CustomModelWrapper(mlflow.pyfunc.PythonModel):
    def __init__(self, model):
        self.model = model

    def predict(self, context, model_input, params=None):
        # params can be None, so guard before calling .get()
        params = params or {}
        confidence_level = params.get("confidence_level", 0.9)
        return self.model.predict(model_input, confidence_level=confidence_level)
```
The model works fine when I load it from the MLflow model registry. Passing in params works as expected.
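For reference, this is roughly how I call it after loading from the registry (the model name and version below are placeholders):

```python
import mlflow
import pandas as pd

# Load the registered model (name/version are placeholders)
model = mlflow.pyfunc.load_model("models:/my_custom_model/1")

data = pd.DataFrame({"x": [1.0, 2.0, 3.0]})

# params is forwarded to CustomModelWrapper.predict as expected
predictions = model.predict(data, params={"confidence_level": 0.95})
```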
But it does not work when I deploy the model to a Databricks model serving endpoint: the `params` argument appears to be dropped or unsupported there. That's unfortunate, since inference-time parameters seem useful for many use cases.
Does Databricks have any plans to support the `params` argument in model serving endpoints? In the meantime, is there a reasonable workaround for letting users specify parameters at inference time? The only one I've come up with so far is sketched below.
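Here is a sketch of that workaround, assuming the endpoint passes `model_input` as a pandas DataFrame: the parameter rides along as an extra column in the input and is popped off before the real prediction.

```python
import mlflow


class CustomModelWrapper(mlflow.pyfunc.PythonModel):
    def __init__(self, model):
        self.model = model

    def predict(self, context, model_input, params=None):
        # Workaround: read the parameter from a column of the input
        # (assumed to be a pandas DataFrame) instead of relying on
        # `params`, which the serving endpoint does not forward.
        confidence_level = 0.9
        if "confidence_level" in model_input.columns:
            confidence_level = float(model_input["confidence_level"].iloc[0])
            model_input = model_input.drop(columns=["confidence_level"])
        return self.model.predict(model_input, confidence_level=confidence_level)
```

This works, but it feels hacky: the parameter pollutes the input schema and has to be duplicated in every row of the request, so I'd prefer proper `params` support if it's on the roadmap.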