Data Engineering
Join discussions on data engineering best practices, architectures, and optimization strategies within the Databricks Community. Exchange insights and solutions with fellow data engineers.

What are the advantages of using Delta if I am using MLflow? How is Delta useful for DS/ML use cases?

Anonymous
Not applicable

I am already using MLflow. What benefit would Delta provide, since I am not really working on data engineering workloads?

1 ACCEPTED SOLUTION

Accepted Solutions

sean_owen
Databricks Employee

Because Delta versions data, it is useful for reproducibility and debugging of models: weeks later you can see exactly how the table looked when the model was built. MLflow's Spark autologging automatically captures and logs this version information when Delta is used in a Databricks notebook.
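As a rough sketch of what that looks like in practice (the table path here is hypothetical, and this assumes a Databricks notebook with an attached Spark cluster):

```python
import mlflow
import mlflow.spark

# Enable Spark datasource autologging: when a Delta table is read inside an
# active MLflow run, the table path, format, and version are recorded as
# tags on the run, so the exact training snapshot is traceable later.
mlflow.spark.autolog()

with mlflow.start_run():
    # "/mnt/data/features" is a placeholder path for illustration
    df = spark.read.format("delta").load("/mnt/data/features")
    # ... train a model on df; the Delta version that was read is
    # captured on the run automatically
```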

Its transactional writes are also useful: a modeling job does not need to worry about other data engineering jobs writing to the same data source at the same time. To a lesser extent, being able to build Delta Live Tables and/or roll back bad writes increases the reliability of upstream data, which in turn improves the reliability of downstream ML jobs.
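For example, time travel lets a modeling job re-read the data exactly as it was at an earlier version, and a bad write can be rolled back. A minimal sketch, again with a hypothetical table name and path (version numbers would come from `DESCRIBE HISTORY`):

```python
# Re-read the table as it existed at commit version 5
df_v5 = (spark.read.format("delta")
         .option("versionAsOf", 5)   # or .option("timestampAsOf", "2023-01-15")
         .load("/mnt/data/features"))

# Undo a bad write by restoring the table to an earlier version
spark.sql("RESTORE TABLE features TO VERSION AS OF 4")
```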


2 REPLIES

Sebastian
Contributor

The most important aspect is that your experiment can track the version of the data table, so during an audit you can trace back why a specific prediction was made.
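Beyond autologging, the Delta version can also be recorded explicitly next to the model run for audit purposes. A sketch, assuming a Databricks notebook and a hypothetical table name `prod.features`:

```python
import mlflow
from delta.tables import DeltaTable

# Look up the latest commit version of the training table
table = DeltaTable.forName(spark, "prod.features")
version = table.history(1).collect()[0]["version"]

with mlflow.start_run():
    # Tag the run with the exact data snapshot used for training,
    # so a later audit can map a prediction back to its input data
    mlflow.set_tag("delta_table", "prod.features")
    mlflow.set_tag("delta_version", version)
    # ... train and log the model in the same run
```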
