What is the importance / what are the benefits of tracking artifacts in MLflow tracking?

Anonymous
Not applicable
Accepted Solution

sean_owen
Honored Contributor II

For me, the main benefit is that it takes little or no work to enable. For example, when autologging is enabled for a library like scikit-learn or PyTorch, a lot of information about a model is captured with no additional steps. Further, on Databricks, the tracking server receiving this information is also managed for you. Even where MLflow logging is done manually, it's relatively trivial to instrument existing ML code with those calls.
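As a rough sketch of how little instrumentation that takes (the dataset and model below are just placeholders):

import mlflow
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Enable autologging for scikit-learn; parameters, metrics, and the model
# artifact are captured automatically when fit() is called.
mlflow.sklearn.autolog()

X, y = make_regression(n_samples=500, n_features=10, random_state=42)
with mlflow.start_run():
    RandomForestRegressor(n_estimators=100).fit(X, y)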

Tracking is useful for a few reasons. First, it helps during experimentation, when you want to compare the results of many runs, perhaps from a hyperparameter sweep. It's useful to have a link to the exact revision of the code that produced a model, rather than trying to remember or write down which bits of code were commented in or out during that best run.
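For instance, the runs of a sweep can be pulled back and compared programmatically; the experiment name, parameter name, and metric name here are assumptions for illustration only:

import mlflow

# Fetch the runs of an experiment as a pandas DataFrame, sorted by a logged metric.
runs = mlflow.search_runs(
    experiment_names=["/Users/me/hyperparam-sweep"],  # placeholder experiment
    order_by=["metrics.rmse ASC"],
)
print(runs[["run_id", "params.n_estimators", "metrics.rmse"]].head())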

It also assists reproducibility by capturing not just the model, but metadata like the library versions used, the version of the Delta tables the model was trained on, the revision of the code, and who built the model and when.
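Where autologging doesn't cover something, the same metadata can be logged by hand; the tag names, values, and toy model below are only illustrative:

import mlflow
from sklearn.linear_model import LinearRegression

with mlflow.start_run():
    model = LinearRegression().fit([[0], [1], [2]], [0, 1, 2])
    # Parameters, tags, and metrics that make the run reproducible later.
    mlflow.log_param("fit_intercept", True)
    mlflow.set_tag("delta_table_version", "42")  # version of the training data
    mlflow.set_tag("git_commit", "abc1234")      # revision of the code
    mlflow.log_metric("rmse", 0.0)
    mlflow.sklearn.log_model(model, artifact_path="model")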

The Model Registry builds on tracking by adding a testing workflow and permissions to the promotion process, which is important for the integrity of a production deployment.
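A minimal sketch of that promotion, assuming a run has already logged a model; the run ID, registered model name, and stage are placeholders (newer MLflow versions also offer aliases instead of stages):

import mlflow
from mlflow.tracking import MlflowClient

# Register the model logged under a run, then promote that version.
result = mlflow.register_model("runs:/<run_id>/model", "MyModel")

client = MlflowClient()
client.transition_model_version_stage(
    name="MyModel",
    version=result.version,
    stage="Staging",
)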

Finally, with that captured information, deployment becomes simpler: the resulting model can be loaded as a Spark UDF for batch scoring, or served behind a REST API.
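For the Spark UDF path, something like the following works; the model URI and column names are assumptions, and spark/df are the usual Databricks SparkSession and an existing feature DataFrame (REST serving would be enabled through Model Serving rather than code):

import mlflow.pyfunc

# Load the Production version of a registered model as a Spark UDF.
predict_udf = mlflow.pyfunc.spark_udf(spark, model_uri="models:/MyModel/Production")

# Apply it to a DataFrame of features; column names are placeholders.
scored = df.withColumn("prediction", predict_udf("feature_1", "feature_2"))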


