Machine Learning
Dive into the world of machine learning on the Databricks platform. Explore discussions on algorithms, model training, deployment, and more. Connect with ML enthusiasts and experts.

How to use MLflow to log a composite estimator (multiple pipelines) and then deploy it as a REST endpoint

prafull
New Contributor

Hello,

I am trying to deploy a composite estimator as a single model by logging the run with MLflow and registering the model.

Can anyone help with how this can be done? The estimator contains different chains:

  1. text: data → TF-IDF → SVM → svm.decision_function → text_dense matrix
  2. cat: data → encoding → scaling → cat_matrix
  3. LightGBM: takes both matrices concatenated (text_dense matrix, cat_matrix)

I am using different pipelines for this, as it's not possible to create a single transformer/pipe. Below is the model blueprint, along with a rough code sketch. I need to train and deploy the model on a Databricks serving endpoint.

 

[Attachment: Screenshot 2024-01-17 000758.png (model blueprint)]
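
In code, the blueprint is roughly the following (a simplified sketch; the column names and hyperparameters are illustrative, not my exact code):

    import scipy.sparse as sp
    from sklearn.pipeline import Pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC
    from sklearn.preprocessing import OneHotEncoder, StandardScaler
    from lightgbm import LGBMClassifier

    # Chain 1: text -> TF-IDF -> SVM; decision_function output becomes the dense text matrix
    text_pipe = Pipeline([
        ("tfidf", TfidfVectorizer()),
        ("svm", LinearSVC()),
    ])

    # Chain 2: categorical columns -> encoding -> scaling
    cat_pipe = Pipeline([
        ("encode", OneHotEncoder(handle_unknown="ignore")),
        ("scale", StandardScaler(with_mean=False)),  # with_mean=False keeps the matrix sparse
    ])

    # Chain 3: LightGBM trained on the concatenation of both feature blocks
    lgbm = LGBMClassifier()

    def fit_composite(df, y, cat_cols):
        text_pipe.fit(df["text"], y)
        text_dense = text_pipe.decision_function(df["text"]).reshape(len(df), -1)
        cat_matrix = cat_pipe.fit_transform(df[cat_cols])
        lgbm.fit(sp.hstack([sp.csr_matrix(text_dense), cat_matrix]).tocsr(), y)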

 

 

1 REPLY

Kaniz_Fatma
Community Manager

Hi @prafull, deploying a composite estimator with MLflow involves several steps.

Let’s break it down:

  1. Logging the Run with MLflow:

    • First, you’ll need to train your composite estimator using the different pipelines you’ve mentioned (text and cat).
    • During training, use MLflow to log the run. This means capturing relevant information such as hyperparameters, metrics, and model artifacts.
    • You can log the run using mlflow.start_run() and then use mlflow.log_param(), mlflow.log_metric(), and mlflow.pyfunc.log_model() (or mlflow.sklearn.log_model() for a single sklearn pipeline) to record the relevant details; see the first sketch after this list.
  2. Registering the Model:

    • Once the run is complete, you can register the model in MLflow. This step allows you to version and track your trained model.
    • Use mlflow.register_model() to register the composite estimator, providing a unique model name and the model URI of the logged artifacts (e.g. runs:/<run-id>/multimodel). Alternatively, pass registered_model_name to log_model() to log and register in one step, as in the first sketch after this list.
  3. Deploying the Model:

    • To deploy the model on a Databricks serving endpoint, you have a few options:
      • Databricks REST API: You can use the Databricks REST API to create a serving endpoint for your registered model. This allows you to make predictions via HTTP requests; see the second sketch after this list.
      • MLflow Model Serving: MLflow provides a convenient way to serve models. You can use the following command to serve your composite estimator locally:
        mlflow models serve --model-uri runs:/<run-id>/multimodel --env-manager local
        
        Replace <run-id> with the actual run ID of your logged model. This will deploy the model to an endpoint on your localhost, and you can interact with it by sending HTTP requests to that endpoint.
  4. Docker Container Image (Optional):

    • If you plan to deploy the model to a cloud platform, consider creating a Docker container image. This image can encapsulate your model and its dependencies.
    • You can use MLflow to create a Docker image suitable for deployment, for example with the mlflow models build-docker CLI command.
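
For steps 1 and 2, a minimal sketch of wrapping your pipelines in a single pyfunc model, logging it, and registering it could look like this (it assumes the fitted text_pipe, cat_pipe, and lgbm objects from your blueprint; the column names, metric, and model name are illustrative):

    import mlflow
    import mlflow.pyfunc
    import scipy.sparse as sp

    class CompositeModel(mlflow.pyfunc.PythonModel):
        # Bundles the text pipeline, the categorical pipeline, and LightGBM
        # into one servable model.
        def __init__(self, text_pipe, cat_pipe, lgbm, cat_cols):
            self.text_pipe = text_pipe
            self.cat_pipe = cat_pipe
            self.lgbm = lgbm
            self.cat_cols = cat_cols

        def predict(self, context, model_input):
            # model_input arrives as a pandas DataFrame at the serving endpoint
            text_dense = self.text_pipe.decision_function(
                model_input["text"]
            ).reshape(len(model_input), -1)
            cat_matrix = self.cat_pipe.transform(model_input[self.cat_cols])
            feats = sp.hstack([sp.csr_matrix(text_dense), cat_matrix]).tocsr()
            return self.lgbm.predict(feats)

    with mlflow.start_run() as run:
        mlflow.log_param("svm_C", 1.0)       # example hyperparameter
        mlflow.log_metric("val_f1", val_f1)  # your validation metric, computed beforehand
        mlflow.pyfunc.log_model(
            artifact_path="multimodel",
            python_model=CompositeModel(text_pipe, cat_pipe, lgbm, ["cat_a", "cat_b"]),
            registered_model_name="composite_estimator",  # logs and registers in one step
        )

    # Two-step alternative to registered_model_name:
    # mlflow.register_model(f"runs:/{run.info.run_id}/multimodel", "composite_estimator")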
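
For step 3, creating and then querying a Databricks serving endpoint for the registered model via the REST API could look roughly like this (the endpoint name, model version, and input record are illustrative; DATABRICKS_HOST and DATABRICKS_TOKEN are assumed to be set in your environment):

    import os
    import requests

    host = os.environ["DATABRICKS_HOST"]  # e.g. https://<workspace>.cloud.databricks.com
    headers = {"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"}

    # Create a serving endpoint for version 1 of the registered model
    requests.post(
        f"{host}/api/2.0/serving-endpoints",
        headers=headers,
        json={
            "name": "composite-estimator-endpoint",
            "config": {
                "served_models": [{
                    "model_name": "composite_estimator",
                    "model_version": "1",
                    "workload_size": "Small",
                    "scale_to_zero_enabled": True,
                }]
            },
        },
    ).raise_for_status()

    # Score a record once the endpoint is ready
    resp = requests.post(
        f"{host}/serving-endpoints/composite-estimator-endpoint/invocations",
        headers=headers,
        json={"dataframe_records": [{"text": "some document", "cat_a": "x", "cat_b": "y"}]},
    )
    print(resp.json())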

Remember to adapt the steps above to your specific use case, including configuring the serving environment and handling any additional requirements. Good luck with deploying your composite estimator! 🚀

 