Machine Learning
Dive into the world of machine learning on the Databricks platform. Explore discussions on algorithms, model training, deployment, and more. Connect with ML enthusiasts and experts.
Content Type error legacy serving

semsim
New Contributor III

Hi,

I have deployed an endpoint in Databricks using legacy serving, using a custom pyfunc in MLflow to deploy my custom code. The code uses machine learning to parse the table of contents out of PDF files and returns it as a CSV, so it is not your typical scoring/prediction model. While I was able to get the code deployed using legacy serving, I am having issues querying the endpoint. I receive the following error from the cluster log:

AttributeError: 'NoneType' object has no attribute 'split'
2024/05/07 13:51:16 ERROR mlflow.pyfunc.scoring_server: Exception on /invocations [POST]
Traceback (most recent call last):
  File "/databricks/conda/envs/model-1/lib/python3.10/site-packages/flask/app.py", line 1473, in wsgi_app
    response = self.full_dispatch_request()
  File "/databricks/conda/envs/model-1/lib/python3.10/site-packages/flask/app.py", line 882, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/databricks/conda/envs/model-1/lib/python3.10/site-packages/flask/app.py", line 880, in full_dispatch_request
    rv = self.dispatch_request()
  File "/databricks/conda/envs/model-1/lib/python3.10/site-packages/flask/app.py", line 865, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)  # type: ignore[no-any-return]
  File "/databricks/conda/envs/model-1/lib/python3.10/site-packages/mlflow/server/handlers.py", line 508, in wrapper
    return func(*args, **kwargs)
  File "/databricks/conda/envs/model-1/lib/python3.10/site-packages/mlflow/pyfunc/scoring_server/__init__.py", line 443, in transformation
    result = invocations(data, content_type, model, input_schema)
  File "/databricks/conda/envs/model-1/lib/python3.10/site-packages/mlflow/pyfunc/scoring_server/__init__.py", line 302, in invocations
    type_parts = list(map(str.strip, content_type.split(";")))
AttributeError: 'NoneType' object has no attribute 'split'

From what I understand of this error, my content_type header appears to be None. But when I query the endpoint via Postman, I set a Content-Type of application/json. Also, my request body is empty. Any ideas?
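For reference, a minimal sketch of a request that sets the header explicitly. The URL, token, and the `pdf_path` column are placeholders, and the `dataframe_records` payload key assumes an MLflow 2.x scoring server; older versions expect a different JSON layout:

```python
import json

# Hypothetical endpoint URL -- substitute your workspace URL and model name.
ENDPOINT_URL = "https://<databricks-instance>/model/<model-name>/invocations"

def build_invocation_request(records, token):
    """Build headers and body for a legacy-serving /invocations call.

    The scoring server calls content_type.split(";"), so the Content-Type
    header must be present; omitting it reproduces the
    "'NoneType' object has no attribute 'split'" error.
    """
    headers = {
        "Content-Type": "application/json",   # must not be missing
        "Authorization": f"Bearer {token}",
    }
    # Send the inputs your pyfunc expects rather than an empty body;
    # "pdf_path" here is a hypothetical input column name.
    body = json.dumps({"dataframe_records": records})
    return headers, body

headers, body = build_invocation_request(
    [{"pdf_path": "/dbfs/tmp/example.pdf"}], token="<personal-access-token>"
)
# Then e.g.: requests.post(ENDPOINT_URL, headers=headers, data=body)
```

In Postman, the equivalent is adding `Content-Type: application/json` under the request's Headers tab and putting the JSON payload in the raw Body tab.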

1 ACCEPTED SOLUTION

Kaniz
Community Manager

Hi @semsim,

  • Ensure that you’re setting the Content-Type header correctly when making requests to your model endpoint. Since you mentioned using Postman, make sure you set the header to application/json.
  • Verify that the request body is correctly formatted, non-empty JSON; an empty body will also fail to parse.
  • Confirm that you have successfully registered your custom model in Unity Catalog or the workspace registry after logging it with MLflow.
  • Double-check the model’s signature and input examples. A signature is required for models registered to Unity Catalog and recommended in general.
  • Make sure that any dependencies required by your custom code are correctly specified in the conda environment used for serving.
  • Explicitly specify dependencies using the conda_env parameter when logging your model with mlflow.pyfunc.log_model.
  • If you’re using an older version of MLflow, consider updating to a more recent version; older versions can cause compatibility issues.
  • Conversely, if you’re using a newer version, ensure that your request format matches what that version’s scoring server expects.
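On the conda_env point, a minimal sketch of what to pass to mlflow.pyfunc.log_model. The environment name and package list are placeholders; list exactly what your predict() imports, pinned to the versions you developed against:

```python
# Hypothetical conda environment for a custom pyfunc that parses PDFs.
conda_env = {
    "name": "toc-parser-env",        # placeholder name
    "channels": ["conda-forge"],
    "dependencies": [
        "python=3.10",               # match the serving cluster's Python
        "pip",
        {"pip": [
            "mlflow",
            "pandas",
            # plus your PDF/ML libraries, e.g. the parser predict() uses
        ]},
    ],
}
# Then, when logging the model:
# mlflow.pyfunc.log_model("model", python_model=YourPyfuncModel(),
#                         conda_env=conda_env, signature=signature,
#                         input_example=example_input)
```

Pairing this with a signature (e.g. from mlflow.models.infer_signature on an example input/output) makes the scoring server validate incoming payloads instead of failing deeper in your code.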

Let me know if you need any more help!

 

