As I am taking my first steps in the Databricks Machine Learning Workspace, I am getting confused by some features that, according to the documentation, seem to overlap.
Does MLflow's autolog for Spark provide different tracking than using a training set created via a Feature Store client? Also, how does FeatureStoreClient.log_model() relate to MLflow?
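For reference, this is roughly the comparison I have in mind. It is only a sketch: the table, key, feature, and label names (`feature_store.customer_features`, `customer_id`, `age`, `total_spend`, `churn`, `ml.churn_labels`) are placeholders I made up, not from a real workspace.

```python
import mlflow
from databricks.feature_store import FeatureStoreClient, FeatureLookup
from sklearn.linear_model import LogisticRegression

# (A) Spark autologging: records datasource info (path, format, version)
# for Spark DataFrames read while an MLflow run is active.
mlflow.spark.autolog()

# (B) Feature Store workflow: build the training set via the client.
fs = FeatureStoreClient()

# Placeholder label table containing customer_id + churn.
label_df = spark.table("ml.churn_labels")

lookups = [
    FeatureLookup(
        table_name="feature_store.customer_features",  # placeholder feature table
        lookup_key="customer_id",
        feature_names=["age", "total_spend"],
    )
]

training_set = fs.create_training_set(
    df=label_df,
    feature_lookups=lookups,
    label="churn",
    exclude_columns=["customer_id"],
)
training_df = training_set.load_df().toPandas()

with mlflow.start_run():
    model = LogisticRegression().fit(
        training_df.drop("churn", axis=1), training_df["churn"]
    )
    # This is the call I am unsure about: how does it differ from
    # calling mlflow.sklearn.log_model() directly?
    fs.log_model(
        model,
        artifact_path="model",
        flavor=mlflow.sklearn,
        training_set=training_set,
    )
```

Concretely: does (A) track anything that (B) does not already capture, or vice versa, and what does fs.log_model() add on top of the plain MLflow logging call?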