I am trying to serve an ALS PySpark model with a custom transformer (for generating user-specific recommendations) via a pyfunc wrapper. Although I can score the logged model successfully, the serving endpoint throws the following error.
```
URI '/model/artifacts/./sparkml' does not point to the current DFS.
File '/model/artifacts/./sparkml' not found on DFS. Will attempt to upload the file.
An error occurred while loading the model. 'NoneType' object has no attribute 'jvm'.
```
Below is a brief summary of the code used for model training, logging, and the model-serving setup:
- Model training and logging to MLflow Model Registry with additional requirements and code paths specified.
- Custom PySpark ML Transformer implementation for generating user-specific recommendations.
- A Python pyfunc wrapper for serving the model.
- An attempt to serve the model via Databricks' MLflow Model Serving, which produces the error above.