DB vector search tutorial with GTE large - BPE tokenizer correct?

Kronos
New Contributor II

Hi, 

I was trying to implement a vector search use case based on the databricks example notebook with GTE large: 

https://docs.databricks.com/aws/en/notebooks/source/generative-ai/vector-search-foundation-embedding...

For chunking, the notebook uses the BPE encoding cl100k_base, the same one used in the equivalent example notebook that is based on an OpenAI model.

Is this correct? I couldn't find any information about the tokenization encoding in the original GTE large paper or anywhere on the web. Does GTE large really use BPE with exactly the same encoding as the newer OpenAI models, or is this an error in the tutorial notebook? Should one rather use AutoTokenizer from Hugging Face?
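For illustration, here's a minimal sketch of the comparison I have in mind, assuming the tokenizer published in the Alibaba-NLP/gte-large-en-v1.5 repo on Hugging Face is the one that matches the model. The two tokenizers can disagree on token counts, which matters when sizing chunks against the model's context window:

```python
import tiktoken
from transformers import AutoTokenizer

text = "Databricks vector search splits documents into chunks before embedding."

# Encoding used for chunking in the tutorial notebook (OpenAI BPE)
bpe = tiktoken.get_encoding("cl100k_base")

# Tokenizer shipped with the GTE model on Hugging Face (assumed repo id)
gte = AutoTokenizer.from_pretrained("Alibaba-NLP/gte-large-en-v1.5")

print("cl100k_base token count:", len(bpe.encode(text)))
print("GTE tokenizer token count:", len(gte.encode(text, add_special_tokens=False)))
```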

Thanks!

2 REPLIES

MariuszK
Valued Contributor II

I created a vector index using databricks-gte-large-en and it worked fine.
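For reference, a minimal sketch of what that looks like with the Databricks Vector Search client; the endpoint, table, and index names here are hypothetical placeholders:

```python
from databricks.vector_search.client import VectorSearchClient

client = VectorSearchClient()

# Delta Sync index where Databricks computes the embeddings server-side
# using the databricks-gte-large-en foundation model endpoint.
index = client.create_delta_sync_index(
    endpoint_name="my_vs_endpoint",          # hypothetical endpoint name
    index_name="main.default.docs_index",    # hypothetical index name
    source_table_name="main.default.docs",   # hypothetical source table
    pipeline_type="TRIGGERED",
    primary_key="id",
    embedding_source_column="text",
    embedding_model_endpoint_name="databricks-gte-large-en",
)
```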

Kronos
New Contributor II

I just found the tokenizer definition, and it is indeed WordPiece tokenization.

So I think the tutorial is wrong. 

https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5/blob/main/tokenizer.json
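If anyone wants to verify this without digging through tokenizer.json, a quick check (assuming the Alibaba-NLP/gte-large-en-v1.5 repo matches the databricks-gte-large-en endpoint) is:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Alibaba-NLP/gte-large-en-v1.5")

# The fast tokenizer exposes its underlying tokenization model; for GTE
# large this should print "WordPiece", not a BPE model class.
print(type(tok.backend_tokenizer.model).__name__)
```

Which also suggests that for computing chunk lengths against this model, AutoTokenizer is the safer choice over tiktoken's cl100k_base.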
