Databricks AutoML (Forecasting) Python SDK for Model Serving

Rajib_Kumar_De
New Contributor II

I am using Databricks AutoML (Python SDK) to forecast bed occupancy (Databricks runs AutoML as MLflow experiments). After training several iterations, I registered the best model in the Databricks Model Registry. Now I am trying to serve the registered model, but the serving endpoint stays in the "Pending" state and the deployment log shows the error message below.

Can anyone please help me here?

Note: the model artifacts were created automatically by the AutoML run, so I don't think I have control over adding pip dependencies or pinning package versions.

Warning: you have pip-installed dependencies in your environment file, but you do not list pip itself as one of your conda dependencies.  Conda may not use the correct pip to install your packages, and they may end up in the wrong place.  Please add an explicit pip dependency.  I'm adding one for you, but still nagging you.
Collecting package metadata (repodata.json): ...working... done
Solving environment: ...working... done
Preparing transaction: ...working... done
Verifying transaction: ...working... done
Executing transaction: ...working... done
Pip subprocess error:
  error: subprocess-exited-with-error
  
  × python setup.py egg_info did not run successfully.
  │ exit code: 1
  ╰─> [8 lines of output]
      Traceback (most recent call last):
        File "<string>", line 2, in <module>
        File "<pip-setuptools-caller>", line 34, in <module>
        File "/tmp/pip-install-_4rq4zju/numba_f054bd7eb658421087c86678b088efff/setup.py", line 51, in <module>
          _guard_py_ver()
        File "/tmp/pip-install-_4rq4zju/numba_f054bd7eb658421087c86678b088efff/setup.py", line 48, in _guard_py_ver
          raise RuntimeError(msg.format(cur_py, min_py, max_py))
      RuntimeError: Cannot install on Python version 3.11.0; only versions >=3.7,<3.11 are supported.
      [end of output]
  
  note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
 
× Encountered error while generating package metadata.
╰─> See above for output.
 
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

I am only hitting this issue with Databricks AutoML forecasting; AutoML regression and classification models serve without this error.
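For reference, the environment files packaged with the registered model can be downloaded and printed to see exactly which Python and numba versions the serving build will try to recreate. This is only a minimal sketch; the registry name and version below are placeholders for the actual ones:

import mlflow

# Placeholder registry name/version -- substitute the real ones.
model_uri = "models:/bed_occupancy_forecast/1"

# Pull the packaged environment files locally and print them, so the pinned
# Python and numba versions can be compared with what the deployment log shows.
local_dir = mlflow.artifacts.download_artifacts(model_uri)
for fname in ("MLmodel", "python_env.yaml", "conda.yaml", "requirements.txt"):
    print(f"--- {fname} ---")
    try:
        with open(f"{local_dir}/{fname}") as f:
            print(f.read())
    except FileNotFoundError:
        print("(not present)")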

3 REPLIES

Debayan
Databricks Employee

Hi,

The error says only Python versions below 3.11 are supported. Have you tried running the deployment with a Python version lower than 3.11?
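Since the AutoML-generated artifacts cannot be edited in place, one possible workaround along these lines is to download the registered model, pin the interpreter in its python_env.yaml to the 3.9.x the run recorded (conda.yaml may need the same pin), and register the patched copy as a new model version. This is only a sketch under those assumptions, not an official fix; the registry names are placeholders:

import mlflow
import yaml

src_uri = "models:/bed_occupancy_forecast/1"   # placeholder name/version
local_dir = mlflow.artifacts.download_artifacts(src_uri)

# Pin the interpreter recorded for serving to the version the AutoML run used.
env_path = f"{local_dir}/python_env.yaml"
with open(env_path) as f:
    py_env = yaml.safe_load(f)
py_env["python"] = "3.9.5"
with open(env_path, "w") as f:
    yaml.safe_dump(py_env, f)

# Log the patched copy under a new run and register it as a new model version.
with mlflow.start_run(run_name="pin_python_for_serving") as run:
    mlflow.log_artifacts(local_dir, artifact_path="model")
mlflow.register_model(f"runs:/{run.info.run_id}/model",
                      "bed_occupancy_forecast_pinned")   # placeholder name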

Rajib_Kumar_De
New Contributor II

Hi Debayan - Thanks for your reply.

As mentioned, I used Databricks AutoML for forecasting, so the environment and dependencies were created automatically. After the AutoML run, the artifact path contains the config below, which records Python 3.9.5. I am not sure why the deployment log shows it trying to install against Python 3.11.

artifact_path: model
databricks_runtime: 12.0.x-cpu-ml-scala2.12
flavors:
  python_function:
    cloudpickle_version: 2.0.0
    env:
      conda: conda.yaml
      virtualenv: python_env.yaml
    loader_module: mlflow.pyfunc.model
    python_model: python_model.pkl
    python_version: 3.9.5
mlflow_version: 2.0.1
model_uuid: f757b91f3a9f4fdd844b9bb9c8ef375c
run_id: 7a62921b7909443bb58b4563d974c84f
signature:
  inputs: '[{"name": "Date", "type": "datetime"}]'
  outputs: '[{"name": "yhat", "type": "double"}]'

Debayan
Databricks Employee

Hi, it could be a bug if the recorded Python version is 3.9.5 and the error is still about version compatibility. Could you please raise a support case so this can be looked into further?
