Hi! I have a custom model registered in Unity Catalog that I am able to load and use for prediction. However, I am unable to deploy the same model using the Model Serving UI. The Databricks Runtime used for model training and deployment is 15.4 ML.
Thanks in advance.
Code Snippet
# Imports used below
import mlflow
from mlflow.models import infer_signature
from pyspark.ml.classification import LogisticRegression

# Define conda environment or pip requirements
conda_env = {
    'name': 'mlflow-env',
    'channels': ['defaults'],
    'dependencies': [
        'python=3.11.11',
        'pip',
        {
            'pip': [
                'pyspark==3.5.0',
                'mlflow==2.19.0'
            ]
        }
    ]
}

# Set model alias
model_alias = 'macro_vars'

# Log model to MLflow
with mlflow.start_run(run_name=f"{model_alias}_run") as run:
    # Fit pipeline to training data
    pipeline_model = pipeline.fit(filtered_data)

    # Transform data using pipeline
    transformed_data = pipeline_model.transform(filtered_data)

    # Train logistic regression model
    lr_model = LogisticRegression(featuresCol='features', labelCol='CO_flag', maxIter=100)
    lr_model_fit = lr_model.fit(transformed_data)

    # Make predictions using trained model
    predictions = lr_model_fit.transform(transformed_data)

    # Log model
    signature = infer_signature(transformed_data.select('features'), predictions.select('prediction'))
    mlflow.spark.log_model(
        spark_model=lr_model_fit,
        artifact_path=model_alias,
        signature=signature,
        conda_env=conda_env
    )

# Register model in Unity Catalog
catalog_name = "czcl"
schema_name = "czcl_gold"
model_name = "czcl_model"
registered_name = f"{catalog_name}.{schema_name}.{model_name}"
model_uri = f"runs:/{run.info.run_id}/{model_alias}"
result = mlflow.register_model(model_uri, registered_name)

# Point the alias at the newly registered version
client = mlflow.MlflowClient()
client.set_registered_model_alias(name=registered_name, alias=model_alias, version=result.version)
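For context, loading the registered model by its alias and scoring with it works fine in a notebook. This is roughly what I run (a minimal sketch; `new_data` here is a placeholder for the Spark DataFrame I actually score, already transformed by the feature pipeline):

import mlflow

# Load the Unity Catalog model version that the alias points to
loaded_model = mlflow.spark.load_model("models:/czcl.czcl_gold.czcl_model@macro_vars")

# Score a DataFrame that already has the 'features' column from the pipeline
# (new_data is a placeholder for the DataFrame scored in the notebook)
scored = loaded_model.transform(new_data)
scored.select("prediction").show()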
Error Message:
[wgkkr] 2025-05-22 23:10:44.182 INFO : Initializing .........
[wgkkr] WARNING:root:mlflow-server
[wgkkr] [2025-05-22 23:10:44 +0000] [10] [INFO] Starting gunicorn 23.0.0
[wgkkr] [2025-05-22 23:10:44 +0000] [10] [INFO] Listening at: http://0.0.0.0:8080 (10)
[wgkkr] [2025-05-22 23:10:44 +0000] [10] [INFO] Using worker: sync
[wgkkr] [2025-05-22 23:10:44 +0000] [11] [INFO] Booting worker with pid: 11
[wgkkr] JAVA_HOME is not set
[wgkkr] [2025-05-22 23:10:50 +0000] An error occurred while loading the model: [JAVA_GATEWAY_EXITED] Java gateway process exited before sending its port number.
[wgkkr] [2025-05-22 23:10:50 +0000] Traceback (most recent call last):
[wgkkr] [2025-05-22 23:10:50 +0000] File "/opt/conda/envs/mlflow-env/lib/python3.11/site-packages/mlflowserving/scoring_server/__init__.py", line 212, in get_model_option_or_exit
[wgkkr] [2025-05-22 23:10:50 +0000] self.model = self.model_future.result()
[wgkkr] [2025-05-22 23:10:50 +0000] ^^^^^^^^^^^^^^^^^^^^^^^^^^
[wgkkr] [2025-05-22 23:10:50 +0000] File "/opt/conda/envs/mlflow-env/lib/python3.11/concurrent/futures/_base.py", line 449, in result
[wgkkr] [2025-05-22 23:10:50 +0000] return self.__get_result()
[wgkkr] [2025-05-22 23:10:50 +0000] ^^^^^^^^^^^^^^^^^^^
[wgkkr] [2025-05-22 23:10:50 +0000] File "/opt/conda/envs/mlflow-env/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
[wgkkr] [2025-05-22 23:10:50 +0000] raise self._exception
[wgkkr] [2025-05-22 23:10:50 +0000] File "/opt/conda/envs/mlflow-env/lib/python3.11/concurrent/futures/thread.py", line 58, in run
[wgkkr] [2025-05-22 23:10:50 +0000] result = self.fn(*self.args, **self.kwargs)
[wgkkr] [2025-05-22 23:10:50 +0000] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[wgkkr] [2025-05-22 23:10:50 +0000] File "/opt/conda/envs/mlflow-env/lib/python3.11/site-packages/mlflowserving/scoring_server/__init__.py", line 132, in _load_model_closure