Machine Learning
Dive into the world of machine learning on the Databricks platform. Explore discussions on algorithms, model training, deployment, and more. Connect with ML enthusiasts and experts.

Hyperopt Ray integration

EmirHodzic
New Contributor II

Hello,

Is there a way to integrate Hyperopt with Ray parallelisation? I have a simulation framework that I want to optimise, and each simulation run is set up as a Ray process; however, I call only one simulation run inside the objective function. This means each Hyperopt trial runs sequentially and does not utilise the Ray framework.
Is there a way to get objective-function results asynchronously, or to push a batch of trials to the objective function?
Otherwise, I would appreciate any comments or advice.

Thank you in advance!

1 ACCEPTED SOLUTION

Kumaran
Databricks Employee

Hi @EmirHodzic 

Thank you for posting your question in the Databricks community.

You can use Ray Tune, Ray's built-in hyperparameter tuning library, to parallelize your Hyperopt trials across multiple nodes.

Here's a link to the documentation for HyperOpt and Ray Tune.

Here's sample code, adapted from the Ray Tune documentation, that uses Ray Tune with the HyperOpt search algorithm to optimize a simple function:

from hyperopt import hp
from ray import tune
from ray.tune.search.hyperopt import HyperOptSearch  # ray.tune.suggest.hyperopt on Ray < 2.0

def objective(config):
    # This function is run remotely in a different Python process.
    return {"score": config["a"] ** 2 + config["b"] ** 2}

space = {
    "a": hp.uniform("a", 0, 1),
    "b": hp.uniform("b", -1, 1),
}

analysis = tune.run(
    objective,
    search_alg=HyperOptSearch(space, metric="score", mode="min"),
    num_samples=100,
)
print("Best hyperparameters found were: ",
      analysis.get_best_config(metric="score", mode="min"))

Sample tutorial:

https://colab.research.google.com/github/ray-project/tutorial/blob/master/tune_exercises/exercise_2_...

View solution in original post


EmirHodzic
New Contributor II

Hi @Kumaran,

Thank you so much for the response. I actually wasn't aware that Ray Tune offers these capabilities as well.

Have a great day!
