Machine Learning
Dive into the world of machine learning on the Databricks platform. Explore discussions on algorithms, model training, deployment, and more. Connect with ML enthusiasts and experts.

Hyperopt Ray integration

EmirHodzic
New Contributor II

Hello,

Is there a way to integrate Hyperopt with Ray parallelisation? I have a simulation framework which I want to optimise, and each simulation run is set up as a Ray process; however, I am calling only one simulation run inside the objective function. This means that each Hyperopt trial runs sequentially and does not utilise the Ray framework.
Is there a way to asynchronously get results for the objective function, or to push a batch of trials to the objective function?
Otherwise I would appreciate if you have any comments or advice.

Thank you in advance!

1 ACCEPTED SOLUTION

Kumaran
Databricks Employee

Hi @EmirHodzic 

Thank you for posting your question in the Databricks community.

You can use Ray Tune, a tuning library built on Ray, to parallelize your Hyperopt trials across multiple nodes.

Here's a link to the documentation for HyperOpt and Ray Tune.

Here's sample code, adapted from the Ray Tune documentation, that uses Ray Tune's Hyperopt search algorithm to optimize a simple function:

from hyperopt import hp
from ray import tune
# On Ray versions before 2.0, import from ray.tune.suggest.hyperopt instead.
from ray.tune.search.hyperopt import HyperOptSearch

def objective(config):
    # This function is run remotely in a separate Python process on a Ray worker.
    return {"score": config["a"] ** 2 + config["b"] ** 2}

# Define the search space using Hyperopt's own expressions.
space = {
    "a": hp.uniform("a", 0, 1),
    "b": hp.uniform("b", -1, 1),
}

analysis = tune.run(
    objective,
    search_alg=HyperOptSearch(space, metric="score", mode="min"),
    metric="score",
    mode="min",
    num_samples=100,
)
print("Best hyperparameters found were:", analysis.best_config)

Sample tutorial:

https://colab.research.google.com/github/ray-project/tutorial/blob/master/tune_exercises/exercise_2_...
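As a side note on the "batch of trials" idea from the original question: the core pattern, evaluating a batch of candidate configurations concurrently instead of one at a time, can be sketched without Ray at all using Python's standard concurrent.futures. The objective and configurations below are hypothetical stand-ins for one simulation run:

```python
from concurrent.futures import ThreadPoolExecutor

def objective(config):
    # Hypothetical stand-in for one simulation run.
    return config["a"] ** 2 + config["b"] ** 2

# A batch of candidate configurations to evaluate concurrently.
batch = [
    {"a": 0.1, "b": 0.5},
    {"a": 0.9, "b": -0.2},
    {"a": 0.4, "b": 0.0},
]

with ThreadPoolExecutor() as pool:
    scores = list(pool.map(objective, batch))

best = batch[scores.index(min(scores))]
print("Best config in batch:", best)
# prints: Best config in batch: {'a': 0.4, 'b': 0.0}
```

In practice, Ray Tune's search algorithm takes over the role of proposing and dispatching these batches across the cluster, so you rarely need to hand-roll this loop.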


2 REPLIES 2


EmirHodzic
New Contributor II

Hi @Kumaran,

Thank you so much for the response; I wasn't aware that Ray Tune offers these capabilities as well.

Have a great day!
