11-05-2021 07:41 AM
Hi all, thank you for taking the time to read my post. Some background to preface: my team and I have been prototyping an ML model that we would like to push into production and deployment. We have been prototyping in Jupyter Notebooks but are trying to figure out what tools and platforms we may require.
I'm by no means an expert in data architecture and was hoping someone could shed some light. The attached diagram shows my understanding so far (the implication being that something like Databricks or SageMaker is an all-in-one platform). However, I'm not sure whether they have functionality like:
- version control
- parameterizing notebooks (see the sketch below for what I mean)
- templatizing notebooks
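To make concrete what I mean by parameterizing a notebook, here's a minimal sketch of the kind of thing we do today with papermill (the notebook names and parameters are just placeholders); I'm curious whether these platforms offer something equivalent out of the box:

```python
import papermill as pm

# Execute the prototype notebook with injected parameters; the notebook only
# needs a cell tagged "parameters" that holds the default values.
pm.execute_notebook(
    "train_prototype.ipynb",       # hypothetical input notebook
    "runs/train_lr_0.01.ipynb",    # executed copy with outputs saved
    parameters={"learning_rate": 0.01, "n_estimators": 50},
)
```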
I hope to hear back and to have an active discussion! Thank you so much!
Accepted Solutions
11-08-2021 09:57 AM
For production model serving, why not just use MLflow Model Serving? You code the model up or import it in your notebooks, log it with MLflow, register it in the MLflow Model Registry, and then there is a nice UI for serving it with Model Serving. It exposes a REST endpoint for your model that any application can call.
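To illustrate, here is a minimal sketch of that flow using the MLflow Python API with a scikit-learn model (the model, run, and registered model name are placeholders, not your actual setup):

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Stand-in for the model you prototyped in your notebooks
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
model = RandomForestClassifier(n_estimators=50, random_state=42)
model.fit(X, y)

# Log the model to an MLflow run and register it in the Model Registry
with mlflow.start_run():
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name="my-prototype-model",  # hypothetical registry name
    )
```

Once the registered model is enabled for Model Serving (through the UI), applications call its REST endpoint with a JSON payload of input records and get predictions back.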
11-08-2021 09:42 AM
It is hard to tell what you need. Are you going to use Spark?