Different tokenizer silently being loaded based on trust_remote_code · Issue #34882 · huggingface/transformers

System Info

python==3.9.20
transformers==4.46.2

Who can help?

@ArthurZucker

Reproduction

I'm trying to use Alibaba-NLP/gte-Qwen2-1.5B-instruct in vLLM, but found that the tokenizer loaded via HF Transformers has a different `padding_side` depending on the `trust_remote_code` setting:

```python
>>> from transformers import AutoTokenizer

>>> AutoTokenizer.from_pretrained("Alibaba-NLP/gte-Qwen2-1.5B-instruct", trust_remote_code=False)
Qwen2TokenizerFast(name_or_path='Alibaba-NLP/gte-Qwen2-1.5B-instruct', vocab_size=151643, model_max_length=32768, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<|endoftext|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>']}, clean_up_tokenization_spaces=False), added_tokens_decoder={
    151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
    151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
    151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
}

>>> AutoTokenizer.from_pretrained("Alibaba-NLP/gte-Qwen2-1.5B-instruct", trust_remote_code=True)
Qwen2TokenizerFast(name_or_path='Alibaba-NLP/gte-Qwen2-1.5B-instruct', vocab_size=151643, model_max_length=32768, is_fast=True, padding_side='left', truncation_side='right', special_tokens={'eos_token': '<|endoftext|>', 'pad_token': '<|endoftext|>', 'additional_special_tokens': ['<|im_start|>', '<|im_end|>']}, clean_up_tokenization_spaces=False), added_tokens_decoder={
    151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
    151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
    151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
}
```
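
For context on why this matters, here is a minimal sketch (the example batch is made up): `padding_side` controls where pad tokens are placed in a batched encoding, so any pooling that reads a fixed position, such as the last token, sees different tokens depending on the setting.

```python
# Sketch: how padding_side changes a padded batch (example inputs are invented).
from transformers import AutoTokenizer

repo = "Alibaba-NLP/gte-Qwen2-1.5B-instruct"
tok_right = AutoTokenizer.from_pretrained(repo, trust_remote_code=False)  # padding_side='right'
tok_left = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)    # padding_side='left'

batch = ["hi", "a much longer input sentence"]
# With right padding, the short sequence ends in pad tokens, so its "last token"
# position holds <|endoftext|> rather than real content.
print(tok_right(batch, padding=True)["input_ids"][0])
# With left padding, pad tokens are prepended and the last position stays meaningful.
print(tok_left(batch, padding=True)["input_ids"][0])
```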

This setting significantly impacts the output of the embedding model. Is this intended behavior? I suspect no error is raised because the custom tokenizer in this HF repo (see relevant file) is named `Qwen2TokenizerFast`, which is also the name of a class defined in core Transformers.
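
One way to see the collision is to inspect the repo's tokenizer config directly. This is a hedged sketch; it assumes the repo advertises its custom class via the standard `auto_map` key in `tokenizer_config.json`:

```python
# Sketch: check whether the repo ships custom tokenizer code (assumes the
# standard auto_map convention; the exact values printed may differ).
import json
from huggingface_hub import hf_hub_download

path = hf_hub_download("Alibaba-NLP/gte-Qwen2-1.5B-instruct", "tokenizer_config.json")
with open(path) as f:
    config = json.load(f)

print(config.get("auto_map"))         # mapping to the repo-local tokenizer class, if any
print(config.get("tokenizer_class"))  # the class name that also exists in core Transformers
```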

Expected behavior

To avoid accidentally loading the wrong tokenizer, the code should raise an error if a custom tokenizer is defined in the HF repo but `trust_remote_code=False` is set, regardless of whether a tokenizer with the same name is defined inside core Transformers.
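
In the meantime, one possible workaround (a sketch, not a confirmed fix for every difference between the two classes) is to pin `padding_side` explicitly, since `from_pretrained` forwards init kwargs to the tokenizer:

```python
# Sketch of an interim workaround: make padding_side independent of trust_remote_code.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "Alibaba-NLP/gte-Qwen2-1.5B-instruct",
    trust_remote_code=False,
    padding_side="left",  # match the default of the repo's custom tokenizer
)
assert tokenizer.padding_side == "left"
```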