If you do not define any storage location yourself, data is stored as managed tables, meaning in the root blob storage that comes with your Databricks workspace (which resides on the cloud provider you use).
If you use your own blob storage or data lake, you can (but don't have to) write your data there as unmanaged (external) tables.
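For illustration, here is a minimal sketch of both variants as you would run them in a Databricks notebook (the table names and the abfss:// path are placeholders, and `spark` is the notebook's built-in session):

```python
# Managed table: no LOCATION given, so Databricks stores the data in the
# workspace's root storage and deletes it when you DROP the table.
spark.sql("""
    CREATE TABLE IF NOT EXISTS events_managed (id INT, payload STRING)
    USING DELTA
""")

# Unmanaged (external) table: you supply the LOCATION, so the data lives in
# your own storage account and survives a DROP TABLE (only metadata is removed).
spark.sql("""
    CREATE TABLE IF NOT EXISTS events_external (id INT, payload STRING)
    USING DELTA
    LOCATION 'abfss://mycontainer@mystorageaccount.dfs.core.windows.net/events'
""")
```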
But basically you can store the data anywhere you want in the cloud, as long as Databricks can access it.
DBFS is a semantic layer on top of the actual storage that makes working with files easier.
So if you mounted three blob storage containers, for example, you can write to any of those three.
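Mounting looks roughly like this (the account, container, and secret scope names are made up; `dbutils` is the Databricks notebook utility):

```python
# Mount an Azure blob storage container under /mnt so it behaves like a DBFS path.
dbutils.fs.mount(
    source="wasbs://container1@storageaccount1.blob.core.windows.net",
    mount_point="/mnt/storage1",
    extra_configs={
        "fs.azure.account.key.storageaccount1.blob.core.windows.net":
            dbutils.secrets.get(scope="my-scope", key="storage1-key")
    },
)

# After mounting, you write to it like any other path:
df = spark.range(10)  # tiny demo DataFrame
df.write.mode("overwrite").parquet("/mnt/storage1/demo/output")
```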
Converting an existing table to Delta:
https://docs.microsoft.com/en-us/azure/databricks/spark/latest/spark-sql/language-manual/delta-conve....
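Per that doc, the in-place conversion is a single SQL statement that adds Delta metadata to existing Parquet files without copying them (the paths below are placeholders):

```python
# Convert Parquet files in place to a Delta table.
spark.sql("CONVERT TO DELTA parquet.`/mnt/storage1/demo/output`")

# For partitioned data, declare the partition columns explicitly:
spark.sql("""
    CONVERT TO DELTA parquet.`/mnt/storage1/events`
    PARTITIONED BY (event_date DATE)
""")
```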
But you could also choose to write the data to another location, so it is copied and saved there in Delta Lake format.
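For example (again with placeholder paths):

```python
# Read the original Parquet data and write a separate copy in Delta format.
df = spark.read.parquet("/mnt/storage1/demo/output")
df.write.format("delta").mode("overwrite").save("/mnt/storage2/delta/output")
```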