Hi @EmirHodzic
Thank you for posting your question in the Databricks community.
You can use Ray Tune, Ray's hyperparameter tuning library, to parallelize your Hyperopt trials across multiple nodes.
Here's a link to the documentation for HyperOpt and Ray Tune.
Here's sample code, adapted from the Ray Tune documentation, that uses Ray Tune with a Hyperopt search space to minimize a simple function:
from hyperopt import hp
from ray import tune
# In older Ray versions this import lives at ray.tune.suggest.hyperopt.
from ray.tune.search.hyperopt import HyperOptSearch

def objective(config):
    # This function is run remotely in a different Python process.
    # Returning a dict reports the result back to Tune.
    return {"score": config["a"] ** 2 + config["b"] ** 2}

# Hyperopt search space: "a" is sampled uniformly from [0, 1], "b" from [-1, 1].
space = {
    "a": hp.uniform("a", 0, 1),
    "b": hp.uniform("b", -1, 1),
}

# tune.run has no "algorithm" argument; the Hyperopt space is passed in
# through the HyperOptSearch search algorithm instead.
analysis = tune.run(
    objective,
    search_alg=HyperOptSearch(space, metric="score", mode="min"),
    num_samples=100,
)

print("Best hyperparameters found were:",
      analysis.get_best_config(metric="score", mode="min"))
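To actually spread the trials across multiple nodes, the driver has to be connected to a running Ray cluster before `tune.run` is called; otherwise Tune only uses the local machine. A minimal sketch, assuming a Ray cluster has already been started on your nodes (e.g. with `ray start --head` on the driver and `ray start --address=...` on the workers):

```python
import ray

# Connect to the existing multi-node Ray cluster instead of starting a
# single-node one; "auto" discovers the locally running cluster. With no
# cluster running, plain ray.init() falls back to a local single-node setup.
ray.init(address="auto")
```

After this, each `tune.run` trial is scheduled as a Ray task on whichever node has free resources.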
Sample tutorial:
https://colab.research.google.com/github/ray-project/tutorial/blob/master/tune_exercises/exercise_2_...