
What is the importance / what are the benefits of tracking artifacts in MLflow Tracking?

Anonymous
1 ACCEPTED SOLUTION


sean_owen
Databricks Employee

For me, the main benefit is that it takes little or no work to enable. For example, when autologging is enabled for a library like sklearn or PyTorch, a lot of information about a model is captured with no additional steps. Further, in Databricks, the tracking server receiving this information is also managed for you. Even where MLflow logging is done manually, it's relatively trivial to instrument existing ML code with those calls.
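
As a minimal sketch of what autologging looks like with scikit-learn (the dataset and model here are just illustrative, and this assumes a tracking server is already configured, e.g. the Databricks-managed one):

```python
import mlflow
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# One call enables capture of params, metrics, and the model artifact for sklearn runs.
mlflow.sklearn.autolog()

X, y = make_regression(n_samples=200, n_features=5, random_state=0)

with mlflow.start_run():
    RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
    # No explicit log_param / log_metric / log_model calls are needed here.
```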

Tracking is useful for a few reasons. First, it helps during experimentation, when one wants to compare the results of many runs, maybe from a hyperparameter sweep. It's useful to have a link to the exact revision of the code that produced a model rather than trying to remember or write down which bits of code were commented in or out during that best run.
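
To make the comparison concrete, here is a rough sketch of a tiny sweep logged manually, one run per setting, so the runs table can be sorted by the metric (the data, model, and metric are invented for the example):

```python
import mlflow
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One MLflow run per hyperparameter value; each run records its param and metric.
for max_depth in [3, 5, 10]:
    with mlflow.start_run():
        model = RandomForestRegressor(max_depth=max_depth, random_state=0).fit(X_train, y_train)
        mlflow.log_param("max_depth", max_depth)
        mlflow.log_metric("mse", mean_squared_error(y_test, model.predict(X_test)))
```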

It assists reproducibility by capturing not just the model but also metadata like the versions of the libraries used, the version of the data in Delta tables used to build the model, the revision of the code, and who built the model and when.
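
A hedged sketch of recording that kind of metadata explicitly as run tags (the table name, Delta version, and commit hash below are assumptions; autologging and Databricks capture some of this for you automatically):

```python
import mlflow
import sklearn

with mlflow.start_run():
    # Versions and provenance recorded alongside the model for later reproduction.
    mlflow.set_tag("sklearn_version", sklearn.__version__)
    mlflow.set_tag("training_table", "prod.features.customers")  # hypothetical Delta table
    mlflow.set_tag("training_table_version", "42")               # hypothetical version from DESCRIBE HISTORY
    mlflow.set_tag("git_commit", "abc1234")                      # hypothetical code revision
    # ... train and log the model as usual ...
```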

The Model Registry builds on tracking to add a workflow for testing and permissions to the promotion process, which is important for the integrity of a production model deployment.
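
For instance, a sketch of registering a logged model and moving it through a stage (the model name, run ID, and stage are placeholders, and newer MLflow releases favor aliases over stages):

```python
import mlflow
from mlflow.tracking import MlflowClient

# Register the model artifact from a finished run; "<run_id>" is a placeholder.
result = mlflow.register_model("runs:/<run_id>/model", "churn_model")

# Promote the new version; in practice this step can sit behind approvals and permissions.
client = MlflowClient()
client.transition_model_version_stage(
    name="churn_model", version=result.version, stage="Staging"
)
```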

Finally, with that captured information, deployment becomes simpler. The logged model can be loaded as a Spark UDF for batch scoring or served behind a REST API.
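
As one sketch of the batch-scoring path (the model URI, table, and column names are assumptions; REST serving is set up through the serving UI/API rather than shown here):

```python
import mlflow.pyfunc
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Load the registered model as a Spark UDF and apply it to a feature table.
predict = mlflow.pyfunc.spark_udf(spark, "models:/churn_model/Staging")

scored = spark.table("prod.features.customers").withColumn(
    "prediction", predict("feature_1", "feature_2")
)
```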
