Artifacts, such as models, model metadata like the "MLmodel" file, input samples, and other logged outputs (plots, configs, network architectures), are stored as files. While these can be plain local-filesystem files when the tracking server runs as a standalone service, they are typically kept on distributed storage, as in Databricks' hosted MLflow.
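As a minimal sketch of how such files end up in artifact storage, the snippet below logs a config dict and a plain file from a run; the file name and config values are illustrative, and with no tracking server configured MLflow defaults to a local `./mlruns` directory:

```python
import mlflow

with mlflow.start_run():
    # Configs and similar metadata are serialized to JSON/YAML artifact files.
    mlflow.log_dict({"lr": 0.01, "layers": [128, 64]}, "config.json")

    # Arbitrary local files (plots, architecture diagrams) are uploaded as-is
    # to the run's artifact root. "notes.txt" is a hypothetical example file.
    with open("notes.txt", "w") as f:
        f.write("baseline run")
    mlflow.log_artifact("notes.txt")
```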
The storage location is determined by the Experiment being logged to, which can be configured to write to any mounted storage. By default in Databricks, artifacts are written to a secured path in the workspace's root bucket, protected by ACLs.
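To illustrate configuring that location, the sketch below creates an Experiment with an explicit artifact root; the experiment name and bucket path are assumptions, and any URI reachable by the clients (e.g. `s3://`, `dbfs:/`, or a local path) works:

```python
import mlflow

# Hypothetical experiment name and artifact location.
exp_id = mlflow.create_experiment(
    "demo-experiment",
    artifact_location="s3://my-bucket/mlflow-artifacts",
)

# Subsequent runs under this experiment write artifacts to that root.
mlflow.set_experiment(experiment_id=exp_id)
```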
Metadata such as params, tags, metrics, and notes is logged to a database underpinning the MLflow tracking server; most standard SQL databases can serve this role, since MLflow accepts any SQLAlchemy-compatible backend store. In Databricks, that database is managed in the control plane.
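A short sketch of the metadata side: these calls write rows to the tracking database rather than files to artifact storage. The param, tag, and metric names are illustrative:

```python
import mlflow

# A self-managed tracking server would typically be started with a database
# backend, e.g.:
#   mlflow server --backend-store-uri postgresql://user:pw@host/db \
#                 --default-artifact-root s3://my-bucket/mlflow-artifacts
# (the connection string and bucket above are hypothetical)

with mlflow.start_run():
    mlflow.log_param("optimizer", "adam")        # stored as a row in the DB
    mlflow.set_tag("team", "forecasting")        # tags likewise live in the DB
    mlflow.log_metric("val_loss", 0.42, step=1)  # one row per metric data point
```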