Because Delta versions data, it is useful for model reproducibility and debugging: weeks later, you can query the table exactly as it looked when the model was built. When Delta is used in a Databricks notebook, MLflow's Spark autologging automatically captures and logs this version information alongside the run.
Delta's transactional writes are also useful: a modeling job does not need to worry about other data engineering jobs writing to the same data source at the same time. To a lesser extent, Delta Live Tables and the ability to roll back bad writes improve the reliability of upstream data, which in turn improves the reliability of downstream ML jobs.