Tokenizer

A tokenizer is in charge of preparing the inputs for a model. The library contains tokenizers for all the models. Most of the tokenizers are available in two flavors: a full Python implementation and a "Fast" implementation based on the Rust library 🤗 Tokenizers. The "Fast" implementations allow:

  1. a significant speed-up, in particular when doing batched tokenization, and
  2. additional methods to map between the original string (characters and words) and the token space (e.g. getting the index of the token comprising a given character or the span of characters corresponding to a given token).

The base classes PreTrainedTokenizer and PreTrainedTokenizerFast implement the common methods for encoding string inputs into model inputs (see below) and for instantiating/saving Python and "Fast" tokenizers, either from a local file or directory or from a pretrained tokenizer provided by the library (downloaded from HuggingFace's AWS S3 repository). They both rely on PreTrainedTokenizerBase, which contains the common methods, and on SpecialTokensMixin.

PreTrainedTokenizer and PreTrainedTokenizerFast thus implement the main methods for using all the tokenizers:

  1. tokenizing (splitting strings into sub-word token strings), converting token strings to ids and back, and encoding/decoding (i.e., tokenizing and converting to integers),
  2. adding new tokens to the vocabulary in a way that is independent of the underlying structure (BPE, SentencePiece…),
  3. managing special tokens (like mask, beginning-of-sentence, etc.): adding them, assigning them to attributes in the tokenizer for easy access and making sure they are not split during tokenization.

BatchEncoding holds the output of the PreTrainedTokenizerBase's encoding methods (__call__, encode_plus and batch_encode_plus) and is derived from a Python dictionary. When the tokenizer is a pure Python tokenizer, this class behaves just like a standard Python dictionary and holds the various model inputs computed by these methods (input_ids, attention_mask…). When the tokenizer is a "Fast" tokenizer (i.e., backed by the HuggingFace tokenizers library), this class additionally provides several advanced alignment methods which can be used to map between the original string (characters and words) and the token space (e.g., getting the index of the token comprising a given character or the span of characters corresponding to a given token).
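For illustration, a minimal sketch (assuming the default fast BERT checkpoint; the indices in the comments are specific to that vocabulary):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")  # fast tokenizer by default
enc = tokenizer("Hello world")  # returns a BatchEncoding
print(enc["input_ids"])         # dict-style access works for both flavors
print(enc.tokens())             # fast-only: ['[CLS]', 'hello', 'world', '[SEP]']
print(enc.char_to_token(6))     # fast-only: index of the token covering character 6 ('w'), here 2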

Multimodal Tokenizer

Apart from that, each tokenizer can be a "multimodal" tokenizer, which means that the tokenizer will hold all relevant special tokens as tokenizer attributes for easier access. For example, if the tokenizer is loaded from a vision-language model like LLaVA, you will be able to access tokenizer.image_token_id to obtain the special image token used as a placeholder.

To enable extra special tokens for any type of tokenizer, you have to add the following lines and save the tokenizer. Extra special tokens do not have to be modality-related and can be anything that the model often needs access to. In the code below, the tokenizer saved at output_dir will have direct access to three more special tokens.

from transformers import AutoTokenizer

vision_tokenizer = AutoTokenizer.from_pretrained(
    "llava-hf/llava-1.5-7b-hf",
    extra_special_tokens={"image_token": "<image>", "boi_token": "<image_start>", "eoi_token": "<image_end>"},
)
print(vision_tokenizer.image_token, vision_tokenizer.image_token_id)  # <image> 32000
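To persist the extra special tokens, save the tokenizer and reload it; a minimal sketch, with "output_dir" standing in for the directory mentioned above:

vision_tokenizer.save_pretrained("output_dir")

reloaded = AutoTokenizer.from_pretrained("output_dir")
print(reloaded.image_token, reloaded.boi_token, reloaded.eoi_token)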

PreTrainedTokenizer

class transformers.PreTrainedTokenizer

( **kwargs )

Base class for all slow tokenizers.

Inherits from PreTrainedTokenizerBase.

Handles all the shared methods for tokenization and special tokens, as well as methods for downloading/caching/loading pretrained tokenizers and for adding tokens to the vocabulary.

This class also contains the added tokens in a unified way on top of all tokenizers, so we don't have to handle the specific vocabulary augmentation methods of the various underlying dictionary structures (BPE, sentencepiece…).

__call__

( text: typing.Union[str, typing.List[str], typing.List[typing.List[str]], NoneType] = None text_pair: typing.Union[str, typing.List[str], typing.List[typing.List[str]], NoneType] = None text_target: typing.Union[str, typing.List[str], typing.List[typing.List[str]], NoneType] = None text_pair_target: typing.Union[str, typing.List[str], typing.List[typing.List[str]], NoneType] = None add_special_tokens: bool = True padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False truncation: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy, NoneType] = None max_length: typing.Optional[int] = None stride: int = 0 is_split_into_words: bool = False pad_to_multiple_of: typing.Optional[int] = None padding_side: typing.Optional[str] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None return_token_type_ids: typing.Optional[bool] = None return_attention_mask: typing.Optional[bool] = None return_overflowing_tokens: bool = False return_special_tokens_mask: bool = False return_offsets_mapping: bool = False return_length: bool = False verbose: bool = True **kwargs ) → BatchEncoding

Returns: A BatchEncoding with the encoded inputs (input_ids, attention_mask, token_type_ids and other fields, depending on the arguments).

Main method to tokenize and prepare one or several sequences, or one or several pairs of sequences, for the model.
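A short usage sketch (the checkpoint is only an example; any tokenizer works the same way):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
batch = tokenizer(
    ["A first sentence.", "A second, slightly longer sentence."],
    padding=True,          # pad to the longest sequence in the batch
    truncation=True,
    max_length=32,
    return_tensors="pt",   # return PyTorch tensors
)
print(batch["input_ids"].shape)
print(batch["attention_mask"])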

add_tokens

( new_tokens: typing.Union[str, tokenizers.AddedToken, typing.List[typing.Union[str, tokenizers.AddedToken]]] special_tokens: bool = False ) → int

Returns: Number of tokens added to the vocabulary.

Add a list of new tokens to the tokenizer class. If the new tokens are not in the vocabulary, they are added to it with indices starting from the length of the current vocabulary and will be isolated before the tokenization algorithm is applied. Added tokens and tokens from the vocabulary of the tokenization algorithm are therefore not treated in the same way.

Note, when adding new tokens to the vocabulary, you should make sure to also resize the token embedding matrix of the model so that its embedding matrix matches the tokenizer.

In order to do that, please use the resize_token_embeddings() method.

Examples:

from transformers import BertTokenizerFast, BertModel

tokenizer = BertTokenizerFast.from_pretrained("google-bert/bert-base-uncased")
model = BertModel.from_pretrained("google-bert/bert-base-uncased")

num_added_toks = tokenizer.add_tokens(["new_tok1", "my_new-tok2"])
print("We have added", num_added_toks, "tokens")

# resize_token_embeddings expects the full size of the new vocabulary,
# i.e., the length of the tokenizer
model.resize_token_embeddings(len(tokenizer))

add_special_tokens

( special_tokens_dict: typing.Dict[str, typing.Union[str, tokenizers.AddedToken]] replace_additional_special_tokens = True ) → int

Returns: Number of tokens added to the vocabulary.

Add a dictionary of special tokens (eos, pad, cls, etc.) to the encoder and link them to class attributes. If special tokens are NOT in the vocabulary, they are added to it (indexed starting from the last index of the current vocabulary).

When adding new tokens to the vocabulary, you should make sure to also resize the token embedding matrix of the model so that its embedding matrix matches the tokenizer.

In order to do that, please use the resize_token_embeddings() method.

Using add_special_tokens will ensure your special tokens can be used in several ways:

  - Special tokens can be skipped when decoding using skip_special_tokens=True.
  - Special tokens are carefully handled by the tokenizer (they are never split).
  - You can easily refer to special tokens using tokenizer class attributes like tokenizer.cls_token. This makes it easy to develop model-agnostic training and fine-tuning scripts.

When possible, special tokens are already registered for provided pretrained models (for instance BertTokenizer's cls_token is already registered to be '[CLS]' and XLM's is registered to be '</s>').

Examples:

from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("openai-community/gpt2")
model = GPT2Model.from_pretrained("openai-community/gpt2")

special_tokens_dict = {"cls_token": "<CLS>"}

num_added_toks = tokenizer.add_special_tokens(special_tokens_dict)
print("We have added", num_added_toks, "tokens")

# resize_token_embeddings expects the full size of the new vocabulary,
# i.e., the length of the tokenizer
model.resize_token_embeddings(len(tokenizer))

assert tokenizer.cls_token == "<CLS>"

apply_chat_template

( conversation: typing.Union[typing.List[typing.Dict[str, str]], typing.List[typing.List[typing.Dict[str, str]]]] tools: typing.Optional[typing.List[typing.Union[typing.Dict, typing.Callable]]] = None documents: typing.Optional[typing.List[typing.Dict[str, str]]] = None chat_template: typing.Optional[str] = None add_generation_prompt: bool = False continue_final_message: bool = False tokenize: bool = True padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False truncation: bool = False max_length: typing.Optional[int] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None return_dict: bool = False return_assistant_tokens_mask: bool = False tokenizer_kwargs: typing.Optional[typing.Dict[str, typing.Any]] = None **kwargs ) → Union[List[int], Dict]

Returns: Union[List[int], Dict]

A list of token ids representing the tokenized chat so far, including control tokens. This output is ready to pass to the model, either directly or via methods like generate(). If return_dict is set, will return a dict of tokenizer outputs instead.

Converts a list of dictionaries with "role" and "content" keys to a list of token ids. This method is intended for use with chat models, and will read the tokenizer’s chat_template attribute to determine the format and control tokens to use when converting.
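A minimal sketch, assuming a checkpoint that ships a chat template (Zephyr is used here purely as an example):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")
chat = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is a tokenizer?"},
]

# tokenize=False returns the formatted prompt string instead of token ids
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
print(prompt)

# the default (tokenize=True) returns a list of token ids ready for generate()
ids = tokenizer.apply_chat_template(chat, add_generation_prompt=True)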

batch_decode

( sequences: typing.Union[typing.List[int], typing.List[typing.List[int]], ForwardRef('np.ndarray'), ForwardRef('torch.Tensor'), ForwardRef('tf.Tensor')] skip_special_tokens: bool = False clean_up_tokenization_spaces: typing.Optional[bool] = None **kwargs ) → List[str]

Returns: The list of decoded sentences.

Convert a list of lists of token ids into a list of strings by calling decode.
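For example (a sketch; the lowercased output assumes an uncased vocabulary):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
enc = tokenizer(["Hello world", "How are you?"], padding=True)
print(tokenizer.batch_decode(enc["input_ids"], skip_special_tokens=True))
# ['hello world', 'how are you?']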

decode

( token_ids: typing.Union[int, typing.List[int], ForwardRef('np.ndarray'), ForwardRef('torch.Tensor'), ForwardRef('tf.Tensor')] skip_special_tokens: bool = False clean_up_tokenization_spaces: typing.Optional[bool] = None **kwargs ) → str

Returns: The decoded sentence.

Converts a sequence of ids into a string, using the tokenizer and vocabulary, with options to remove special tokens and clean up tokenization spaces.

Similar to doing self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids)).

encode

( text: typing.Union[str, typing.List[str], typing.List[int]] text_pair: typing.Union[str, typing.List[str], typing.List[int], NoneType] = None add_special_tokens: bool = True padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False truncation: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy, NoneType] = None max_length: typing.Optional[int] = None stride: int = 0 padding_side: typing.Optional[str] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None **kwargs ) → List[int], torch.Tensor, tf.Tensor or np.ndarray

Returns: List[int], torch.Tensor, tf.Tensor or np.ndarray

The tokenized ids of the text.

Converts a string to a sequence of ids (integers), using the tokenizer and vocabulary.

Same as doing self.convert_tokens_to_ids(self.tokenize(text)).
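A quick round trip with decode (the ids shown in the comments are illustrative of bert-base-uncased):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
ids = tokenizer.encode("Hello world")  # e.g. [101, 7592, 2088, 102]
print(tokenizer.decode(ids))           # '[CLS] hello world [SEP]'
print(tokenizer.decode(ids, skip_special_tokens=True))  # 'hello world'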

push_to_hub

( repo_id: str use_temp_dir: typing.Optional[bool] = None commit_message: typing.Optional[str] = None private: typing.Optional[bool] = None token: typing.Union[bool, str, NoneType] = None max_shard_size: typing.Union[int, str, NoneType] = '5GB' create_pr: bool = False safe_serialization: bool = True revision: typing.Optional[str] = None commit_description: typing.Optional[str] = None tags: typing.Optional[list[str]] = None **deprecated_kwargs )

Upload the tokenizer files to the 🤗 Model Hub.

Examples:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")

# Push the tokenizer to your namespace with the name "my-finetuned-bert".
tokenizer.push_to_hub("my-finetuned-bert")

# Push the tokenizer to an organization with the name "my-finetuned-bert".
tokenizer.push_to_hub("huggingface/my-finetuned-bert")

convert_ids_to_tokens

( ids: typing.Union[int, list[int]] skip_special_tokens: bool = False ) → str or List[str]

Returns: The decoded token(s).

Converts a single index or a sequence of indices into a token or a sequence of tokens, using the vocabulary and added tokens.

convert_tokens_to_ids

( tokens: typing.Union[str, list[str]] ) → int or List[int]

Returns: The token id or list of token ids.

Converts a token string (or a sequence of tokens) into a single integer id (or a sequence of ids), using the vocabulary.

get_added_vocab

( ) → Dict[str, int]

Returns the added tokens in the vocabulary as a dictionary of token to index. Results might be different from the fast call because for now we always add the tokens even if they are already in the vocabulary. This is something we should change.

num_special_tokens_to_add

( pair: bool = False ) → int

Returns: Number of special tokens added to sequences.

Returns the number of added tokens when encoding a sequence with special tokens.

This encodes a dummy input and checks the number of added tokens, and is therefore not efficient. Do not put this inside your training loop.
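For instance, a BERT tokenizer adds [CLS] and [SEP]:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
print(tokenizer.num_special_tokens_to_add())           # 2: [CLS] ... [SEP]
print(tokenizer.num_special_tokens_to_add(pair=True))  # 3: [CLS] ... [SEP] ... [SEP]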

prepare_for_tokenization

( text: str is_split_into_words: bool = False **kwargs ) → Tuple[str, Dict[str, Any]]

Returns: Tuple[str, Dict[str, Any]]

The prepared text and the unused kwargs.

Performs any necessary transformations before tokenization.

This method should pop the arguments from kwargs and return the remaining kwargs as well. We test the kwargs at the end of the encoding process to be sure all the arguments have been used.

tokenize

( text: str **kwargs ) → List[str]

Returns: The list of tokens.

Converts a string into a sequence of tokens, using the tokenizer.

Splits into words for word-based vocabularies, or into sub-words for sub-word-based vocabularies (BPE/SentencePiece/WordPiece). Takes care of added tokens.
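For example, with a WordPiece vocabulary (the exact split shown is a sketch and depends on the vocabulary):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
print(tokenizer.tokenize("Tokenization is lossless."))
# e.g. ['token', '##ization', 'is', 'loss', '##less', '.']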

PreTrainedTokenizerFast

PreTrainedTokenizerFast depends on the tokenizers library. The tokenizers obtained from the 🤗 Tokenizers library can be loaded very simply into 🤗 Transformers. Take a look at the Using tokenizers from 🤗 tokenizers page to understand how this is done.
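A minimal sketch of the two common loading paths ("tokenizer.json" is a hypothetical file produced with 🤗 Tokenizers):

from tokenizers import Tokenizer
from transformers import PreTrainedTokenizerFast

# wrap an in-memory tokenizer built with the 🤗 Tokenizers library...
tok = Tokenizer.from_file("tokenizer.json")
fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=tok)

# ...or point the wrapper directly at the serialized file
fast_tokenizer = PreTrainedTokenizerFast(tokenizer_file="tokenizer.json")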

class transformers.PreTrainedTokenizerFast

( *args **kwargs )

Base class for all fast tokenizers (wrapping HuggingFace tokenizers library).

Inherits from PreTrainedTokenizerBase.

Handles all the shared methods for tokenization and special tokens, as well as methods for downloading/caching/loading pretrained tokenizers and for adding tokens to the vocabulary.

This class also contains the added tokens in a unified way on top of all tokenizers so we don’t have to handle the specific vocabulary augmentation methods of the various underlying dictionary structures (BPE, sentencepiece…).

__call__

( text: typing.Union[str, typing.List[str], typing.List[typing.List[str]], NoneType] = None text_pair: typing.Union[str, typing.List[str], typing.List[typing.List[str]], NoneType] = None text_target: typing.Union[str, typing.List[str], typing.List[typing.List[str]], NoneType] = None text_pair_target: typing.Union[str, typing.List[str], typing.List[typing.List[str]], NoneType] = None add_special_tokens: bool = True padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False truncation: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy, NoneType] = None max_length: typing.Optional[int] = None stride: int = 0 is_split_into_words: bool = False pad_to_multiple_of: typing.Optional[int] = None padding_side: typing.Optional[str] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None return_token_type_ids: typing.Optional[bool] = None return_attention_mask: typing.Optional[bool] = None return_overflowing_tokens: bool = False return_special_tokens_mask: bool = False return_offsets_mapping: bool = False return_length: bool = False verbose: bool = True **kwargs ) → BatchEncoding

Returns: A BatchEncoding with the encoded inputs (input_ids, attention_mask, token_type_ids and other fields, depending on the arguments).

Main method to tokenize and prepare one or several sequences, or one or several pairs of sequences, for the model.

add_tokens

( new_tokens: typing.Union[str, tokenizers.AddedToken, typing.List[typing.Union[str, tokenizers.AddedToken]]] special_tokens: bool = False ) → int

Returns: Number of tokens added to the vocabulary.

Add a list of new tokens to the tokenizer class. If the new tokens are not in the vocabulary, they are added to it with indices starting from the length of the current vocabulary and will be isolated before the tokenization algorithm is applied. Added tokens and tokens from the vocabulary of the tokenization algorithm are therefore not treated in the same way.

Note, when adding new tokens to the vocabulary, you should make sure to also resize the token embedding matrix of the model so that its embedding matrix matches the tokenizer.

In order to do that, please use the resize_token_embeddings() method.

Examples:

from transformers import BertTokenizerFast, BertModel

tokenizer = BertTokenizerFast.from_pretrained("google-bert/bert-base-uncased")
model = BertModel.from_pretrained("google-bert/bert-base-uncased")

num_added_toks = tokenizer.add_tokens(["new_tok1", "my_new-tok2"])
print("We have added", num_added_toks, "tokens")

# resize_token_embeddings expects the full size of the new vocabulary,
# i.e., the length of the tokenizer
model.resize_token_embeddings(len(tokenizer))

add_special_tokens

( special_tokens_dict: typing.Dict[str, typing.Union[str, tokenizers.AddedToken]] replace_additional_special_tokens = True ) → int

Returns: Number of tokens added to the vocabulary.

Add a dictionary of special tokens (eos, pad, cls, etc.) to the encoder and link them to class attributes. If special tokens are NOT in the vocabulary, they are added to it (indexed starting from the last index of the current vocabulary).

When adding new tokens to the vocabulary, you should make sure to also resize the token embedding matrix of the model so that its embedding matrix matches the tokenizer.

In order to do that, please use the resize_token_embeddings() method.

Using add_special_tokens will ensure your special tokens can be used in several ways:

  - Special tokens can be skipped when decoding using skip_special_tokens=True.
  - Special tokens are carefully handled by the tokenizer (they are never split).
  - You can easily refer to special tokens using tokenizer class attributes like tokenizer.cls_token. This makes it easy to develop model-agnostic training and fine-tuning scripts.

When possible, special tokens are already registered for provided pretrained models (for instance BertTokenizer's cls_token is already registered to be '[CLS]' and XLM's is registered to be '</s>').

Examples:

from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("openai-community/gpt2")
model = GPT2Model.from_pretrained("openai-community/gpt2")

special_tokens_dict = {"cls_token": "<CLS>"}

num_added_toks = tokenizer.add_special_tokens(special_tokens_dict)
print("We have added", num_added_toks, "tokens")

# resize_token_embeddings expects the full size of the new vocabulary,
# i.e., the length of the tokenizer
model.resize_token_embeddings(len(tokenizer))

assert tokenizer.cls_token == "<CLS>"

apply_chat_template

( conversation: typing.Union[typing.List[typing.Dict[str, str]], typing.List[typing.List[typing.Dict[str, str]]]] tools: typing.Optional[typing.List[typing.Union[typing.Dict, typing.Callable]]] = None documents: typing.Optional[typing.List[typing.Dict[str, str]]] = None chat_template: typing.Optional[str] = None add_generation_prompt: bool = False continue_final_message: bool = False tokenize: bool = True padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False truncation: bool = False max_length: typing.Optional[int] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None return_dict: bool = False return_assistant_tokens_mask: bool = False tokenizer_kwargs: typing.Optional[typing.Dict[str, typing.Any]] = None **kwargs ) → Union[List[int], Dict]

Returns: Union[List[int], Dict]

A list of token ids representing the tokenized chat so far, including control tokens. This output is ready to pass to the model, either directly or via methods like generate(). If return_dict is set, will return a dict of tokenizer outputs instead.

Converts a list of dictionaries with "role" and "content" keys to a list of token ids. This method is intended for use with chat models, and will read the tokenizer’s chat_template attribute to determine the format and control tokens to use when converting.

batch_decode

( sequences: typing.Union[typing.List[int], typing.List[typing.List[int]], ForwardRef('np.ndarray'), ForwardRef('torch.Tensor'), ForwardRef('tf.Tensor')] skip_special_tokens: bool = False clean_up_tokenization_spaces: typing.Optional[bool] = None **kwargs ) → List[str]

Returns: The list of decoded sentences.

Convert a list of lists of token ids into a list of strings by calling decode.

decode

( token_ids: typing.Union[int, typing.List[int], ForwardRef('np.ndarray'), ForwardRef('torch.Tensor'), ForwardRef('tf.Tensor')] skip_special_tokens: bool = False clean_up_tokenization_spaces: typing.Optional[bool] = None **kwargs ) → str

Returns: The decoded sentence.

Converts a sequence of ids into a string, using the tokenizer and vocabulary, with options to remove special tokens and clean up tokenization spaces.

Similar to doing self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids)).

encode

( text: typing.Union[str, typing.List[str], typing.List[int]] text_pair: typing.Union[str, typing.List[str], typing.List[int], NoneType] = None add_special_tokens: bool = True padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False truncation: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy, NoneType] = None max_length: typing.Optional[int] = None stride: int = 0 padding_side: typing.Optional[str] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None **kwargs ) → List[int], torch.Tensor, tf.Tensor or np.ndarray

Returns: List[int], torch.Tensor, tf.Tensor or np.ndarray

The tokenized ids of the text.

Converts a string to a sequence of ids (integers), using the tokenizer and vocabulary.

Same as doing self.convert_tokens_to_ids(self.tokenize(text)).

push_to_hub

( repo_id: str use_temp_dir: typing.Optional[bool] = None commit_message: typing.Optional[str] = None private: typing.Optional[bool] = None token: typing.Union[bool, str, NoneType] = None max_shard_size: typing.Union[int, str, NoneType] = '5GB' create_pr: bool = False safe_serialization: bool = True revision: typing.Optional[str] = None commit_description: typing.Optional[str] = None tags: typing.Optional[list[str]] = None **deprecated_kwargs )

Upload the tokenizer files to the 🤗 Model Hub.

Examples:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")

# Push the tokenizer to your namespace with the name "my-finetuned-bert".
tokenizer.push_to_hub("my-finetuned-bert")

# Push the tokenizer to an organization with the name "my-finetuned-bert".
tokenizer.push_to_hub("huggingface/my-finetuned-bert")

convert_ids_to_tokens

( ids: typing.Union[int, list[int]] skip_special_tokens: bool = False ) → str or List[str]

Returns: The decoded token(s).

Converts a single index or a sequence of indices into a token or a sequence of tokens, using the vocabulary and added tokens.

convert_tokens_to_ids

( tokens: typing.Union[str, collections.abc.Iterable[str]] ) → int or List[int]

Returns: The token id or list of token ids.

Converts a token string (or a sequence of tokens) into a single integer id (or a sequence of ids), using the vocabulary.

get_added_vocab

( ) → Dict[str, int]

Returns the added tokens in the vocabulary as a dictionary of token to index.

num_special_tokens_to_add

( pair: bool = False ) → int

Returns: Number of special tokens added to sequences.

Returns the number of added tokens when encoding a sequence with special tokens.

This encodes a dummy input and checks the number of added tokens, and is therefore not efficient. Do not put this inside your training loop.

set_truncation_and_padding

( padding_strategy: PaddingStrategy truncation_strategy: TruncationStrategy max_length: int stride: int pad_to_multiple_of: typing.Optional[int] padding_side: typing.Optional[str] )

Define the truncation and the padding strategies for fast tokenizers (provided by HuggingFace tokenizers library) and restore the tokenizer settings afterwards.

The provided tokenizer has no padding/truncation strategy before the managed section. If your tokenizer had a padding/truncation strategy set before, it will be reset to no padding/truncation when exiting the managed section.

train_new_from_iterator

( text_iterator vocab_size length = None new_special_tokens = None special_tokens_map = None **kwargs ) → PreTrainedTokenizerFast

Returns: A new tokenizer of the same type as the original one, trained on text_iterator.

Trains a tokenizer on a new corpus with the same defaults (in terms of special tokens or tokenization pipeline) as the current one.
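A sketch (the corpus, vocabulary size and output directory are placeholders):

from transformers import AutoTokenizer

old_tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2")

corpus = ["first document ...", "second document ..."]  # any iterator over raw texts works
new_tokenizer = old_tokenizer.train_new_from_iterator(corpus, vocab_size=32000)
new_tokenizer.save_pretrained("my-new-tokenizer")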

BatchEncoding

class transformers.BatchEncoding

( data: typing.Optional[typing.Dict[str, typing.Any]] = None encoding: typing.Union[tokenizers.Encoding, typing.Sequence[tokenizers.Encoding], NoneType] = None tensor_type: typing.Union[NoneType, str, transformers.utils.generic.TensorType] = None prepend_batch_axis: bool = False n_sequences: typing.Optional[int] = None )

Holds the output of the __call__(), encode_plus() and batch_encode_plus() methods (tokens, attention_masks, etc.).

This class is derived from a python dictionary and can be used as a dictionary. In addition, this class exposes utility methods to map from word/character space to token space.

char_to_token

( batch_or_char_index: int char_index: typing.Optional[int] = None sequence_index: int = 0 ) → int

Returns: Index of the token, or None if the char index refers to a whitespace-only token and whitespace is trimmed with trim_offsets=True.

Get the index of the token in the encoded output comprising a character in the original string for a sequence of the batch.

Can be called as:

  - self.char_to_token(char_index) if batch size is 1
  - self.char_to_token(batch_index, char_index) if batch size is greater or equal to 1

This method is particularly suited when the input sequences are provided as pre-tokenized sequences (i.e. words are defined by the user). In this case it allows you to easily associate encoded tokens with the provided tokenized words.

char_to_word

( batch_or_char_index: int char_index: typing.Optional[int] = None sequence_index: int = 0 ) → int or List[int]

Returns: Index or indices of the associated word(s) in the original string.

Get the word in the original string corresponding to a character in the original string of a sequence of the batch.

Can be called as:

  - self.char_to_word(char_index) if batch size is 1
  - self.char_to_word(batch_index, char_index) if batch size is greater or equal to 1

This method is particularly suited when the input sequences are provided as pre-tokenized sequences (i.e. words are defined by the user). In this case it allows you to easily associate encoded tokens with the provided tokenized words.

convert_to_tensors

( tensor_type: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None prepend_batch_axis: bool = False )

Convert the inner content to tensors.

sequence_ids

( batch_index: int = 0 ) → List[Optional[int]]

Returns: List[Optional[int]]

A list indicating the sequence id corresponding to each token. Special tokens added by the tokenizer are mapped to None and other tokens are mapped to the index of their corresponding sequence.

Return a list mapping the tokens to the id of their original sentences:

  - None for special tokens added around or between sequences,
  - 0 for tokens corresponding to words in the first sequence,
  - 1 for tokens corresponding to words in the second sequence when a pair of sequences was jointly encoded.

to

( device: typing.Union[str, ForwardRef('torch.device')] non_blocking: bool = False ) → BatchEncoding

Returns: The same instance after modification.

Send all values to device by calling v.to(device, non_blocking=non_blocking) (PyTorch only).
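For example, to move the encoded inputs to the same device as a model (a minimal sketch):

import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
enc = tokenizer("Hello world", return_tensors="pt")
enc = enc.to("cuda" if torch.cuda.is_available() else "cpu")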

token_to_chars

( batch_or_token_index: int token_index: typing.Optional[int] = None ) → CharSpan

Returns: Span of characters in the original string, or None if the token (e.g. <s>, </s>) doesn't correspond to any characters in the original string.

Get the character span corresponding to an encoded token in a sequence of the batch.

Character spans are returned as a CharSpan NamedTuple with: start (index of the first character in the original string) and end (index of the character following the last character in the original string).

Can be called as:

  - self.token_to_chars(token_index) if batch size is 1
  - self.token_to_chars(batch_index, token_index) if batch size is greater or equal to 1

token_to_sequence

( batch_or_token_index: int token_index: typing.Optional[int] = None ) → int

Returns: Index of the sequence containing the given token.

Get the index of the sequence represented by the given token. In the general use case, this method returns 0 for a single sequence or the first sequence of a pair, and 1 for the second sequence of a pair.

Can be called as:

  - self.token_to_sequence(token_index) if batch size is 1
  - self.token_to_sequence(batch_index, token_index) if batch size is greater or equal to 1

This method is particularly suited when the input sequences are provided as pre-tokenized sequences (i.e., words are defined by the user). In this case it allows you to easily associate encoded tokens with the provided tokenized words.

token_to_word

( batch_or_token_index: int token_index: typing.Optional[int] = None ) → int

Returns: Index of the word in the input sequence.

Get the index of the word corresponding to (i.e. comprising) an encoded token in a sequence of the batch.

Can be called as:

  - self.token_to_word(token_index) if batch size is 1
  - self.token_to_word(batch_index, token_index) if batch size is greater or equal to 1

This method is particularly suited when the input sequences are provided as pre-tokenized sequences (i.e., words are defined by the user). In this case it allows you to easily associate encoded tokens with the provided tokenized words.

tokens

( batch_index: int = 0 ) → List[str]

Returns: The list of tokens at that index.

Return the list of tokens (sub-parts of the input strings after word/subword splitting and before conversion to integer indices) at a given batch index (only works for the output of a fast tokenizer).

word_ids

( batch_index: int = 0 ) → List[Optional[int]]

Returns: List[Optional[int]]

A list indicating the word corresponding to each token. Special tokens added by the tokenizer are mapped to None and other tokens are mapped to the index of their corresponding word (several tokens will be mapped to the same word index if they are parts of that word).

Return a list mapping the tokens to their actual word in the initial sentence for a fast tokenizer.
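This is handy for aligning word-level labels with tokens, e.g. in token classification (a sketch; the exact split depends on the vocabulary):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
enc = tokenizer("Tokenizers are great")
print(enc.tokens())    # e.g. ['[CLS]', 'token', '##izers', 'are', 'great', '[SEP]']
print(enc.word_ids())  # e.g. [None, 0, 0, 1, 2, None]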

word_to_chars

( batch_or_word_index: int word_index: typing.Optional[int] = None sequence_index: int = 0 ) → CharSpan or List[CharSpan]

Returns: CharSpan or List[CharSpan]

Span(s) of the associated character or characters in the string.

Get the character span in the original string corresponding to a given word in a sequence of the batch.

Character spans are returned as a CharSpan NamedTuple with: start (index of the first character in the original string) and end (index of the character following the last character in the original string).

Can be called as:

  - self.word_to_chars(word_index) if batch size is 1
  - self.word_to_chars(batch_index, word_index) if batch size is greater or equal to 1

word_to_tokens

( batch_or_word_index: int word_index: typing.Optional[int] = None sequence_index: int = 0 ) → (TokenSpan, optional)

Returns: (TokenSpan, optional)

Span of tokens in the encoded sequence. Returns None if no tokens correspond to the word. This can happen especially when the token is a special token that has been used to format the tokenization, for example when we add a class token at the very beginning of the tokenization.

Get the encoded token span corresponding to a word in a sequence of the batch.

Token spans are returned as a TokenSpan NamedTuple with: start (index of the first token) and end (index of the token following the last token).

Can be called as:

  - self.word_to_tokens(word_index, sequence_index=0) if batch size is 1
  - self.word_to_tokens(batch_index, word_index, sequence_index=0) if batch size is greater or equal to 1

This method is particularly suited when the input sequences are provided as pre-tokenized sequences (i.e. words are defined by the user). In this case it allows you to easily associate encoded tokens with the provided tokenized words.

words

( batch_index: int = 0 ) → List[Optional[int]]

Returns: List[Optional[int]]

A list indicating the word corresponding to each token. Special tokens added by the tokenizer are mapped to None and other tokens are mapped to the index of their corresponding word (several tokens will be mapped to the same word index if they are parts of that word).

Return a list mapping the tokens to their actual word in the initial sentence for a fast tokenizer.
