03-27-2025 10:28 PM
I am trying to train a forecasting model with hyperparameter tuning using Hyperopt.
I have multiple time series, one per "KEY", and I want to train a separate model for each of them. To do this I am using Spark's applyInPandas to tune and train a model for each series in parallel.
The applyInPandas call is made inside the MLflow parent run. The child runs are nested inside the train function, which applyInPandas calls after repartitioning on "KEY".
However, the execution neither makes progress (even after running for more than an hour) nor fails with any exception.
The code runs fine if I exclude the MLflow tracking.
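Roughly, the structure looks like the sketch below (illustrative names only; `df` and `result_schema` stand in for the actual DataFrame and output schema):

```python
import mlflow
import pandas as pd

def build_tune_and_score_model(pdf: pd.DataFrame) -> pd.DataFrame:
    # One nested child run per "KEY", with the Hyperopt search inside it
    with mlflow.start_run(run_name=str(pdf["KEY"].iloc[0]), nested=True):
        # ... Hyperopt tuning + model training for this series ...
        pass
    return pdf

with mlflow.start_run(run_name="Parent_RUN"):
    result = (
        df.repartition("KEY")
          .groupBy("KEY")
          .applyInPandas(build_tune_and_score_model, schema=result_schema)
    )
```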
4 weeks ago
Hi @shubham_lekhwar ,
This is a common context-passing issue when using Spark with MLflow.
The problem is that the nested=True flag in mlflow.start_run relies on an active run being present in the current process context. Your Parent_RUN is active on the driver node, but the build_tune_and_score_model function executes on worker nodes, which are separate processes and have no knowledge of the driver's active run. This causes the MLflow client on the worker to hang, waiting for a parent context that doesn't exist.
The solution is to manually pass the parent run's ID to the worker function and set the parent-child relationship using a tag.
You need to make two changes: one on the driver and one in your worker function.
Change 1 (on the driver): Get the parent_run_id before calling applyInPandas, and use functools.partial to "bake" this ID into the function that Spark will distribute.
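A minimal sketch of the driver-side change, assuming the `df`, `result_schema`, and `build_tune_and_score_model` names from your post (those are placeholders for your actual objects):

```python
import functools
import mlflow

with mlflow.start_run(run_name="Parent_RUN") as parent_run:
    # Capture the parent run ID on the driver
    parent_run_id = parent_run.info.run_id

    # Bake the parent run ID into the function Spark will serialize to the workers
    worker_fn = functools.partial(build_tune_and_score_model, parent_run_id=parent_run_id)

    result_df = (
        df.repartition("KEY")
          .groupBy("KEY")
          .applyInPandas(worker_fn, schema=result_schema)
    )
    result_df.count()  # any action triggers the distributed execution
```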
Change 2 (in the worker function): Modify the function signature to accept the new parent_run_id argument. Then, instead of nested=True, start a regular run and manually set the parent run ID using the mlflow.parentRunId tag with mlflow.set_tag.
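A hedged sketch of the worker-side change; the tuning/training body is elided and the logged params/metrics are placeholders:

```python
import mlflow
import pandas as pd

def build_tune_and_score_model(pdf: pd.DataFrame, parent_run_id: str) -> pd.DataFrame:
    key = pdf["KEY"].iloc[0]

    # Start a regular (non-nested) run on the worker and link it to the driver's
    # parent run via the mlflow.parentRunId tag.
    # Depending on your cluster setup, you may also need to set the tracking URI
    # or experiment explicitly here.
    with mlflow.start_run(run_name=f"child_{key}"):
        mlflow.set_tag("mlflow.parentRunId", parent_run_id)

        # ... your Hyperopt search and model training for this KEY go here ...
        # mlflow.log_params(best_params)        # placeholder
        # mlflow.log_metric("rmse", best_rmse)  # placeholder

    return pdf  # return a DataFrame matching the schema passed to applyInPandas
```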