Depending on how many models you have, different solutions may be appropriate, and conveniently, if you're working in Python, you can use MLflow as a front end for most of them. For personal projects, a local MLflow instance might be the right call. For anything larger, you can point MLflow's backend at a database or a remote tracking server, so your trained models can live in the cloud (AWS, Google Cloud, etc.) or on resources outside the cloud (on-premises), including any server that exposes an HTTP endpoint.
For more information, see the MLflow documentation, which has additional resources on backend stores, including the Databricks integration.