Hi,
I was trying to implement a vector search use case based on the Databricks example notebook that uses GTE-large:
https://docs.databricks.com/aws/en/notebooks/source/generative-ai/vector-search-foundation-embedding...
For chunking, the notebook uses BPE tokenization with the cl100k_base encoding, which is the same encoding used in the equivalent example notebook based on an OpenAI model.
Is this correct? I couldn't find any information about the tokenization encoding in the original GTE-large paper or anywhere on the web. Does GTE-large really use BPE with exactly the same encoding as the newer OpenAI models, or is this an error in the tutorial notebook? Should one rather use AutoTokenizer from Hugging Face for counting tokens when chunking?
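For concreteness, this is what I had in mind (a minimal sketch, assuming the checkpoint is thenlper/gte-large on Hugging Face; the text string is just a placeholder). It compares the token count from the model's own tokenizer against the cl100k_base count the notebook uses:

```python
import tiktoken
from transformers import AutoTokenizer

text = "Some document chunk to measure."

# Token count with the model's own tokenizer (WordPiece for BERT-style models)
hf_tokenizer = AutoTokenizer.from_pretrained("thenlper/gte-large")
hf_ids = hf_tokenizer.encode(text, add_special_tokens=False)
print("AutoTokenizer count:", len(hf_ids))

# Token count with the BPE encoding the notebook uses
enc = tiktoken.get_encoding("cl100k_base")
print("cl100k_base count:", len(enc.encode(text)))
```

Since the two counts can differ, chunk sizes budgeted with cl100k_base wouldn't exactly match the model's 512-token limit, which is why I'm unsure about the notebook's approach.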
Thanks!