Hey Sara, this is Somayeh from VINN Automotive.
As I already shared with you, I am trying to distribute hyperparameter tuning with hyperopt on a tensorflow.keras model. I am using SparkTrials in my fmin call:
spark_trials = SparkTrials(parallelism=4)
...
best_hyperparam = fmin(fn=CNN_HOF,
                       space=space,
                       algo=tpe.suggest,
                       max_evals=tuner_max_evals,
                       trials=spark_trials)
but I am receiving this error:
TypeError: cannot pickle '_thread.lock' object
The only way the code works is if I skip passing the trials by commenting out the trials=spark_trials line, which means there is no distributed tuning.
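i.e., the same fmin call runs fine if I just drop that argument and let hyperopt fall back to its default sequential Trials:

best_hyperparam = fmin(fn=CNN_HOF,
                       space=space,
                       algo=tpe.suggest,
                       max_evals=tuner_max_evals)  # no trials=spark_trials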
I went through the resources you shared but couldn't find anything that works for me. Any idea how I can fix this?
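In case it helps, here is a simplified sketch of roughly how CNN_HOF is structured (the "filters" and "lr" hyperparameter keys, the layer sizes, and the random data are just placeholders here, the real function is more involved):

# Simplified placeholder sketch, not the real CNN_HOF
import numpy as np
import tensorflow as tf
from hyperopt import STATUS_OK

def CNN_HOF(params):
    # The model and data are created entirely inside the objective
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(int(params["filters"]), 3, activation="relu",
                               input_shape=(28, 28, 1)),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(params["lr"]),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # Placeholder random data standing in for the real dataset
    x = np.random.rand(64, 28, 28, 1).astype("float32")
    y = np.random.randint(0, 10, size=64)
    history = model.fit(x, y, epochs=1, batch_size=16, verbose=0)
    return {"loss": history.history["loss"][-1], "status": STATUS_OK}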