For me, the main benefit is that it takes little or no work to enable. For example, when autologging is enabled for a library like sklearn or PyTorch, a lot of information about a model is captured with no additional steps. Further, in Databricks, the tracking server receiving this information is also managed for you. Even where MLflow logging is done manually, it's relatively trivial to instrument existing ML code with those calls.
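As a rough sketch of what that looks like, assuming scikit-learn and a reachable tracking server (local or Databricks-managed), a single autolog call is enough to capture parameters, metrics, and the fitted model:

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Enable autologging for scikit-learn: hyperparameters, training metrics,
# and the fitted model are recorded automatically when fit() is called.
mlflow.sklearn.autolog()

X, y = load_diabetes(return_X_y=True)

with mlflow.start_run():
    model = RandomForestRegressor(n_estimators=100, max_depth=6)
    model.fit(X, y)
```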
Tracking is useful for a few reasons. First, it helps during experimentation, when one wants to compare the results of many runs, perhaps from a hyperparameter sweep. It's useful to have a link to the exact revision of the code that produced a model rather than trying to remember or write down which bits of code were commented in or out during that best run.
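A minimal sketch of manually instrumenting a small sweep, assuming scikit-learn and a tracking server to log to:

```python
import mlflow
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# One run per hyperparameter setting, so the sweep can be compared
# side by side (and sorted by metric) in the tracking UI.
for max_depth in [2, 4, 8]:
    with mlflow.start_run():
        mlflow.log_param("max_depth", max_depth)
        model = RandomForestRegressor(max_depth=max_depth, random_state=0)
        model.fit(X_train, y_train)
        rmse = mean_squared_error(y_val, model.predict(X_val)) ** 0.5
        mlflow.log_metric("rmse", rmse)
```

Each setting becomes its own run, so picking the best configuration is a matter of sorting runs by the logged metric rather than reconstructing what was tried.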
It also assists reproducibility by capturing not just the model but metadata such as the versions of the libraries used, the version of the data in the Delta tables that fed the model, the revision of the code, and who built the model and when.
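That metadata can be read back later from the tracking server. The sketch below assumes a run ID copied from the tracking UI; the exact set of automatically attached tags depends on how and where the run was launched:

```python
from mlflow.tracking import MlflowClient

client = MlflowClient()
run = client.get_run("<run-id-from-the-tracking-ui>")  # placeholder run ID

# Tags MLflow attaches automatically; for example, the git commit is only
# captured when the run is launched from a git checkout.
print(run.data.tags.get("mlflow.user"))               # who built the model
print(run.data.tags.get("mlflow.source.git.commit"))  # code revision
print(run.data.params)                                # logged hyperparameters
print(run.info.start_time)                            # when it was built
```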
The Model Registry builds on tracking by adding a workflow for testing and permissions around promotion to production, which is important for the integrity of a production model deployment.
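A rough sketch of that promotion workflow, with placeholder run and model names (newer MLflow releases favor model aliases over stages, so treat the stage transition as illustrative):

```python
import mlflow
from mlflow.tracking import MlflowClient

# Register the model logged by an earlier run under a registry name.
result = mlflow.register_model("runs:/<run-id>/model", "churn_model")

# Once the new version has been reviewed and tested, promote it.
client = MlflowClient()
client.transition_model_version_stage(
    name="churn_model",
    version=result.version,
    stage="Staging",
)
```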
Finally, with that captured information, deployment becomes simpler. The resulting artifact can be retrieved as a Spark UDF or served behind a REST API.
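For example, scoring with a registered model as a Spark UDF might look roughly like this, where the model name, stage, and input DataFrame are placeholders:

```python
from pyspark.sql import SparkSession
import mlflow.pyfunc

spark = SparkSession.builder.getOrCreate()

# Load a registered model as a Spark UDF and apply it column-wise;
# "features_df" stands in for whatever DataFrame holds the model inputs.
predict = mlflow.pyfunc.spark_udf(spark, model_uri="models:/churn_model/Production")
scored = features_df.withColumn("prediction", predict(*features_df.columns))

# The same artifact can be exposed over REST, e.g. with the MLflow CLI:
#   mlflow models serve -m "models:/churn_model/Production" -p 5000
```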