Error at model serving for quantised models using bitsandbytes library

phi_alpaca
New Contributor III

Hello,

I've been trying to serve registered MLflow models on a GPU Model Serving endpoint, which works except for models that use the bitsandbytes library. The library is used to quantise LLMs to 4-bit/8-bit (e.g. Mistral-7B), but the deployment fails while the model is being registered at the endpoint. This error is shown in the service log:

[Attached screenshot phi_alpaca_1-1708013174746.png: bitsandbytes error from the serving endpoint service log]

All required libraries are listed in the requirements.txt file. It looks like one option to fix the error is to run a bash script that helps the package locate the right path, but we're not able to do that inside a serving endpoint.
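For reference, the model is quantised with bitsandbytes roughly like this before logging (a minimal sketch; the model name and 4-bit settings are illustrative, not our exact code):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit quantisation is handled by bitsandbytes under the hood
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model_name = "mistralai/Mistral-7B-v0.1"  # example model mentioned above
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",  # quantised weights need a GPU at load time
)

The failure only appears once the logged model is deployed to the serving endpoint, as in the log above.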

Has anyone successfully served a quantised LLM on Databricks Model Serving using bitsandbytes? If so, how did you get around this? Any help on the topic would be much appreciated.

Thanks

 

8 REPLIES

G-M
Contributor

Hi @phi_alpaca , we are facing exactly the same issue trying to serve a bitsandbytes quantized version of Mixtral-8x7B. Have you made any progress resolving this? The answer from @Retired_mod isn't too helpful and seems to be AI-generated...

As you say, the deployed container is such a black box that we can't take the diagnostic steps listed in the error output.

phi_alpaca
New Contributor III

Hey @G-M , thanks for sharing your experience as well. Unfortunately I haven't had any luck resolving this on my end. I'd be interested to know if you have any breakthrough down the line. Is this something Databricks might be able to fix? @Retired_mod 
Thanks

JAgreenskylake
New Contributor II

Hi @phi_alpaca , have you managed to solve this? We have a similar issue.

Hey @JAgreenskylake , no luck so far. I have been working around it by not using quantised models, which is not ideal, so I really hope this becomes possible soon.

G-M
Contributor

@phi_alpaca

We solved it by providing a conda_env.yaml when we log the model; all we needed was to add cudatoolkit=11.8 to the dependencies. A rough sketch is below.
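In Python it looks roughly like this (a sketch, not our exact code; the pyfunc wrapper class and environment name are placeholders):

import mlflow

# Conda env passed at log time; the serving container rebuilds this env,
# so bitsandbytes can find the CUDA runtime it needs.
conda_env = {
    "name": "quantized-llm-serving",  # placeholder name
    "channels": ["conda-forge"],
    "dependencies": [
        "python=3.10",
        "cudatoolkit=11.8",  # the one addition that fixed the serving error for us
        "pip",
        {
            "pip": [
                "mlflow",
                "torch",
                "transformers",
                "accelerate",
                "bitsandbytes",
            ]
        },
    ],
}

with mlflow.start_run():
    mlflow.pyfunc.log_model(
        artifact_path="model",
        python_model=QuantizedLLMWrapper(),  # placeholder mlflow.pyfunc.PythonModel subclass
        conda_env=conda_env,
    )

The conda_env argument also accepts a path to a conda_env.yaml file instead of an inline dict, which is what we pass when we log the model.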

phi_alpaca
New Contributor III

Thanks so much for sharing and glad it worked out for you guys!
I will have a go and report back.

phi_alpaca
New Contributor III

I seem to have some compatibility issues with cudatoolkit=11.8. Would it be possible for you to share which versions you use for torch, transformers, accelerate, and bitsandbytes? Thanks!

These versions are working for us:

torch==1.13.1
transformers==4.35.2
accelerate==0.25.0
bitsandbytes==0.41.3
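Slotted into the conda_env sketch above, the pip section would look like this (illustrative):

# Pinned pip deps reported working above; cudatoolkit=11.8 stays as a conda dependency.
conda_env["dependencies"][-1] = {
    "pip": [
        "mlflow",
        "torch==1.13.1",
        "transformers==4.35.2",
        "accelerate==0.25.0",
        "bitsandbytes==0.41.3",
    ]
}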
