Machine Learning
Dive into the world of machine learning on the Databricks platform. Explore discussions on algorithms, model training, deployment, and more. Connect with ML enthusiasts and experts.

How much do model size and lag impact distributed inference?

anvil
New Contributor II

Hello!

I was wondering how impactful a model's size or inference lag is when inference is distributed.

With tools like Pandas Iterator UDFs or mlflow.pyfunc.spark_udf(), models can be loaded only once per worker, so I would tend to say that minimizing inference lag is more important than minimizing size: size costs us once per worker, whereas lag costs us once per observation.
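A minimal sketch of the load-once-per-worker pattern described above — simulated here in plain Python (no Spark session) so the load count is easy to verify; `load_model` and the per-batch loop are stand-ins for what a Pandas Iterator UDF does on each executor:

```python
from typing import Iterator, List

LOAD_CALLS = 0  # counts how often the (expensive) model load runs


def load_model():
    """Stand-in for loading a serialized model from disk (expensive)."""
    global LOAD_CALLS
    LOAD_CALLS += 1
    return lambda batch: [x * 2 for x in batch]  # toy "model"


def predict_udf(batches: Iterator[List[float]]) -> Iterator[List[float]]:
    """Mimics a Pandas Iterator UDF: the model is loaded once per
    partition/worker, then reused for every incoming batch."""
    model = load_model()        # load cost paid once
    for batch in batches:       # inference lag paid once per batch/row
        yield model(batch)


# One "worker" processing three batches: the load cost is amortized.
results = list(predict_udf(iter([[1.0, 2.0], [3.0], [4.0, 5.0]])))
```

In the real Spark version the same amortization happens per executor process; the generator shape is what signals to Spark that state (the model) should survive across batches.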

I would also say that the impact is even greater with ensemble models, where several models, each with its own lag, must infer once per observation.
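A quick arithmetic check of the ensemble point, with hypothetical per-member lags: each observation pays every member's lag, so the per-row cost grows with ensemble size while the load cost remains a one-off per worker.

```python
member_lags_ms = [3.0, 4.0, 5.0]  # hypothetical per-member inference lags
rows = 100_000

# Each observation pays every member's lag once.
per_row_ms = sum(member_lags_ms)        # 12.0 ms per observation
total_lag_s = rows * per_row_ms / 1000  # total lag across all rows, seconds
```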

Is this assumption correct?

Thank you!

1 REPLY

youssefmrini
Honored Contributor III

Your assumption that minimizing inference lag is more important than minimizing the size of the model in a distributed setting is generally correct.

In a distributed environment, models are typically loaded once per worker, as you mentioned, which means the cost of model size is largely confined to that initial load. Inference lag, by contrast, is paid every time an observation is processed, so it scales with the number of rows and typically dominates overall runtime on large datasets.
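A back-of-envelope cost model makes the trade-off concrete (illustrative numbers, not benchmarks): the load cost is fixed per worker, while the lag cost scales with the row count, so past a modest data size the lag term dominates.

```python
# Illustrative cost model: load time is paid once per worker,
# inference lag once per observation.
workers = 8
rows = 1_000_000
load_s = 30.0   # seconds to load the model on one worker (assumed)
lag_ms = 5.0    # per-observation inference latency (assumed)

total_load_s = workers * load_s               # fixed, independent of rows
# Assuming rows are split evenly and workers run in parallel:
total_lag_s = rows * lag_ms / 1000 / workers  # grows with the data
```

Even with a hefty 30 s load time, the lag term overtakes the load term here, and the gap widens linearly as the dataset grows.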
