Hyperopt Error: There are no evaluation tasks, cannot return argmin of task losses.

ChingizK
New Contributor III

The trials succeed when the cell in the notebook is executed manually:

[Screenshot: 01.png]

However, the same process fails when executed as a Workflow:

 
[Screenshot: 02.png]

The error simply says that there's an issue with the objective function. But how can that be the case when the exact same code runs successfully if I execute the notebook cell manually? The run only fails when it's triggered through a Workflow.

Unfortunately, changing the compute cluster had no effect either.

The task fails with the following error: Exception: There are no evaluation tasks, cannot return argmin of task losses.

1 ACCEPTED SOLUTION


Kaniz_Fatma
Community Manager

Hi @ChingizK,

Error message: "Exception: There are no evaluation tasks, cannot return argmin of task losses."

This error occurs when none of the evaluations of the objective function complete successfully, so Hyperopt has no trial losses from which to return an argmin.

A likely reason the code works when run manually but fails through a Workflow is a difference in execution environment or context between the interactive session and the job run.

Suggestions to investigate:
  1. Check the execution context and environment variables to ensure they are the same when running manually and through the Workflow.
  2. Verify that the objective function can handle unexpected inputs and return a valid loss value, or report a failure status, in all cases (see the first sketch after this list).
  3. Confirm that the MLflow tracking URI and experiment are set correctly (second sketch below).
  4. If using SparkTrials for distributed hyperparameter tuning, be aware that MLflow runs may not be nested under the parent run.
  5. Use MLflow to debug the issue by checking the MLflow Runs table for failed runs and examining their logs, parameters, and metrics for anomalies (third sketch below).
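To make point 2 concrete, here is a minimal sketch of a defensive objective function. The quadratic loss and the single hyperparameter x are placeholders standing in for your real training code; the point is that exceptions and non-finite losses are reported with STATUS_FAIL instead of crashing the evaluation task, so any successful trials still give fmin something to minimize.

    import math

    from hyperopt import STATUS_FAIL, STATUS_OK, Trials, fmin, hp, tpe


    def objective(params):
        """Defensive Hyperopt objective: never raises, always returns a status."""
        try:
            # Placeholder "training" step -- replace with your real model fit/score.
            loss = (params["x"] - 3.0) ** 2
            if not math.isfinite(loss):
                return {"status": STATUS_FAIL, "error": "non-finite loss"}
            return {"status": STATUS_OK, "loss": float(loss)}
        except Exception as exc:
            # Record the failure in the trials object instead of killing the task.
            return {"status": STATUS_FAIL, "error": str(exc)}


    search_space = {"x": hp.uniform("x", -10.0, 10.0)}

    trials = Trials()
    best = fmin(fn=objective, space=search_space, algo=tpe.suggest,
                max_evals=20, trials=trials)
    print(best)

    # For distributed tuning, SparkTrials can be swapped in, e.g.:
    #   from hyperopt import SparkTrials
    #   trials = SparkTrials(parallelism=4)

If every trial still fails, fmin raises the same "no evaluation tasks" exception at the end, but the error strings recorded in trials.trials make it much easier to see why the Workflow run behaves differently from the manual one.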

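For point 3, a quick sanity check is to pin the tracking URI and experiment explicitly at the top of the notebook, so the interactive run and the Workflow run are guaranteed to log to the same place. This is only a sketch; the experiment path below is a placeholder, not something taken from your notebook.

    import mlflow

    # On Databricks the workspace tracking server is addressed as "databricks";
    # setting it explicitly rules out one environment difference in the job run.
    mlflow.set_tracking_uri("databricks")

    # Placeholder workspace path -- replace with your own experiment.
    experiment_path = "/Users/you@example.com/hyperopt-tuning"
    mlflow.set_experiment(experiment_path)

    # Print what the run will actually use; compare the output of the manual
    # run with the Workflow run.
    print(mlflow.get_tracking_uri())
    print(mlflow.get_experiment_by_name(experiment_path))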

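For point 5, the MLflow Runs table can also be queried programmatically, which is convenient when the failure only happens inside a Workflow. A sketch, assuming a reasonably recent MLflow version and the placeholder experiment path from the previous snippet:

    import mlflow

    # Pull all runs for the experiment, newest first, as a pandas DataFrame.
    runs = mlflow.search_runs(
        experiment_names=["/Users/you@example.com/hyperopt-tuning"],
        order_by=["start_time DESC"],
    )

    # Inspect the runs that did not finish successfully.
    failed = runs[runs["status"] != "FINISHED"]
    print(failed[["run_id", "status", "start_time"]])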
