Machine Learning

For tuning hyperparameters with Apache Spark ML / MLlib, when should I use Spark ML's built-in tuning algorithms vs. Hyperopt?

Joseph_B
Databricks Employee

When should I use Spark ML's CrossValidator or TrainValidationSplit, vs. a separate tuning tool such as Hyperopt?

1 REPLY

Joseph_B
Databricks Employee

Both are valid choices. By default, I'd recommend Hyperopt nowadays. Here's the rationale, laid out as the pros and cons of each.

Spark ML's built-in tools

  • Pros: These fit the Spark ML Pipeline framework, so you can keep using the same type of APIs.
  • Cons: These are designed for brute-force grid search. That's fine for a small number of hyperparameters (say, up to ~3), but it becomes inefficient when you have many hyperparameters or want to test many combinations (see the grid-search sketch after this list).
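
Here's a minimal grid-search sketch with CrossValidator, not taken from the original thread. It assumes a Spark DataFrame `train_df` with feature columns `f1`, `f2`, `f3` and a binary `label` column; those names, and the grid values, are purely illustrative.

```python
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

# Assumed DataFrame: `train_df` with columns f1, f2, f3 and a binary "label".
assembler = VectorAssembler(inputCols=["f1", "f2", "f3"], outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label")
pipeline = Pipeline(stages=[assembler, lr])

# Brute-force grid: every combination below (3 x 3 = 9) is trained and evaluated.
grid = (ParamGridBuilder()
        .addGrid(lr.regParam, [0.001, 0.01, 0.1])
        .addGrid(lr.elasticNetParam, [0.0, 0.5, 1.0])
        .build())

cv = CrossValidator(estimator=pipeline,
                    estimatorParamMaps=grid,
                    evaluator=BinaryClassificationEvaluator(labelCol="label"),
                    numFolds=3,
                    parallelism=4)  # fit several candidate models concurrently

cv_model = cv.fit(train_df)
best_model = cv_model.bestModel  # the pipeline fitted with the best grid point
```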

Hyperopt

  • Pros: This provides a more adaptive, iterative tuning algorithm, which can be more efficient in terms of the number of hyperparameter settings you need to try to reach a given accuracy. This is especially important when tuning many hyperparameters or testing many settings.
  • Cons: It sits outside the Spark ML Pipeline framework, so you lose the uniform Pipeline-style API that the built-in tools give you (the mirror image of Spark ML's pros above). A minimal Hyperopt sketch follows this list.
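
And here's a minimal Hyperopt sketch for the same kind of model, again not from the original thread. It assumes pre-split Spark DataFrames `train_df` and `val_df` with the same illustrative column names as above. Because a Spark ML fit already runs across the cluster, this uses plain serial `Trials` rather than `SparkTrials`.

```python
from hyperopt import fmin, hp, tpe, Trials, STATUS_OK
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.feature import VectorAssembler

# Assumed DataFrames: `train_df` and `val_df` with columns f1, f2, f3 and "label".
assembler = VectorAssembler(inputCols=["f1", "f2", "f3"], outputCol="features")
evaluator = BinaryClassificationEvaluator(labelCol="label")

def objective(params):
    # Train one Spark ML pipeline for the hyperparameters Hyperopt proposes.
    lr = LogisticRegression(featuresCol="features", labelCol="label",
                            regParam=params["regParam"],
                            elasticNetParam=params["elasticNetParam"])
    model = Pipeline(stages=[assembler, lr]).fit(train_df)
    auc = evaluator.evaluate(model.transform(val_df))
    # Hyperopt minimizes the loss, so return the negative AUC.
    return {"loss": -auc, "status": STATUS_OK}

search_space = {
    "regParam": hp.loguniform("regParam", -6, 0),          # roughly e^-6 to 1
    "elasticNetParam": hp.uniform("elasticNetParam", 0.0, 1.0),
}

# Adaptive search: TPE picks each new setting based on the results so far.
best = fmin(fn=objective,
            space=search_space,
            algo=tpe.suggest,
            max_evals=32,
            trials=Trials())
```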
