Pre-tokenizers

BertPreTokenizer

class tokenizers.pre_tokenizers.BertPreTokenizer

( )

BertPreTokenizer

This pre-tokenizer splits tokens on spaces and on punctuation. Each occurrence of a punctuation character is treated separately.
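A quick sketch of this behavior (the output shown in the comments is illustrative):

```python
from tokenizers.pre_tokenizers import BertPreTokenizer

pre_tokenizer = BertPreTokenizer()
# Splits on whitespace and isolates every punctuation character.
print(pre_tokenizer.pre_tokenize_str("Hello, world!"))
# e.g. [('Hello', (0, 5)), (',', (5, 6)), ('world', (7, 12)), ('!', (12, 13))]
```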

ByteLevel

class tokenizers.pre_tokenizers.ByteLevel

( add_prefix_space = True use_regex = True )

Parameters

add_prefix_space (bool, defaults to True): Whether to add a space to the first word if there isn't already one. This lets us treat "hello" exactly like "say hello".

use_regex (bool, defaults to True): Set this to False to prevent this pre-tokenizer from using the GPT-2 specific regexp for splitting on whitespace.

ByteLevel PreTokenizer

This pre-tokenizer takes care of replacing all bytes of the given string with a corresponding representation, as well as splitting into words.

alphabet

( ) → List[str]

Returns

List[str]

A list of characters that compose the alphabet

Returns the alphabet used by this PreTokenizer.

Since the ByteLevel works as its name suggests, at the byte level, it encodes each byte value to a unique visible character. This means that there is a total of 256 different characters composing this alphabet.
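A minimal sketch of both the pre-tokenizer and its alphabet (the visible characters, e.g. 'Ġ' for a space, follow the GPT-2 byte-to-unicode convention):

```python
from tokenizers.pre_tokenizers import ByteLevel

# The byte-level alphabet maps every possible byte value to a visible character.
print(len(ByteLevel.alphabet()))  # 256

pre_tokenizer = ByteLevel(add_prefix_space=True, use_regex=True)
print(pre_tokenizer.pre_tokenize_str("Hello world"))
# Spaces show up as a visible character, e.g. [('ĠHello', ...), ('Ġworld', ...)]
```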

CharDelimiterSplit

class tokenizers.pre_tokenizers.CharDelimiterSplit

( )

This pre-tokenizer simply splits on the provided char. Works like .split(delimiter)
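A small sketch, assuming the delimiter character is passed to the constructor:

```python
from tokenizers.pre_tokenizers import CharDelimiterSplit

pre_tokenizer = CharDelimiterSplit("|")  # split on '|', like "foo|bar".split("|")
print(pre_tokenizer.pre_tokenize_str("foo|bar|baz"))
# e.g. [('foo', (0, 3)), ('bar', (4, 7)), ('baz', (8, 11))]
```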

Digits

class tokenizers.pre_tokenizers.Digits

( individual_digits = False )

Parameters

individual_digits (bool, defaults to False): Whether to split each digit into its own token.

This pre-tokenizer simply splits digits off into separate tokens

If individual_digits is set to True, each digit will be separated as follows:

"Call 123 please" -> "Call ", "1", "2", "3", " please"

If set to False, digits will be grouped as follows:

"Call 123 please" -> "Call ", "123", " please"

Metaspace

class tokenizers.pre_tokenizers.Metaspace

( replacement = '▁' prepend_scheme = 'always' split = True )

Parameters

replacement (str, defaults to "▁"): The replacement character. Must be exactly one character.

prepend_scheme (str, defaults to "always"): Whether to prepend the replacement character to the first piece. Choices: "always", "never", "first".

split (bool, defaults to True): Whether to split on the replacement character.

Metaspace pre-tokenizer

This pre-tokenizer replaces any whitespace with the provided replacement character and then splits on it.
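A minimal sketch, assuming the default '▁' replacement character:

```python
from tokenizers.pre_tokenizers import Metaspace

pre_tokenizer = Metaspace(replacement="▁", prepend_scheme="always")
print(pre_tokenizer.pre_tokenize_str("Hello world"))
# e.g. [('▁Hello', (0, 5)), ('▁world', (5, 11))]
```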

PreTokenizer

class tokenizers.pre_tokenizers.PreTokenizer

( )

Base class for all pre-tokenizers

This class is not supposed to be instantiated directly. Instead, any implementation of a PreTokenizer will return an instance of this class when instantiated.

pre_tokenize

( pretok )

Parameters

pretok (PreTokenizedString): The pre-tokenized string on which to apply this PreTokenizer.

Pre-tokenize a PreTokenizedString in place

This method allows you to modify a PreTokenizedString in order to keep track of the pre-tokenization, and leverage the capabilities of the PreTokenizedString. If you just want to see the result of the pre-tokenization of a raw string, you can use pre_tokenize_str().
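A sketch of in-place pre-tokenization, assuming PreTokenizedString is importable from the top-level tokenizers module:

```python
from tokenizers import PreTokenizedString
from tokenizers.pre_tokenizers import Whitespace

pretok = PreTokenizedString("Hello, world!")
# Modifies the PreTokenizedString in place, preserving alignment information.
Whitespace().pre_tokenize(pretok)
print(pretok.get_splits())
```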

pre_tokenize_str

( sequence ) → List[Tuple[str, Offsets]]

Parameters

sequence (str): The string to pre-tokenize.

Returns

List[Tuple[str, Offsets]]

A list of tuples with the pre-tokenized parts and their offsets

Pre-tokenize the given string

This method provides a way to visualize the effect of a PreTokenizer, but it does not keep track of the alignment, nor does it provide all the capabilities of the PreTokenizedString. If you need some of these, you can use pre_tokenize().

Punctuation

class tokenizers.pre_tokenizers.Punctuation

( behavior = 'isolated' )

Parameters

behavior (SplitDelimiterBehavior, defaults to "isolated"): The behavior to use when splitting. Choices: "removed", "isolated", "merged_with_previous", "merged_with_next", "contiguous".

This pre-tokenizer simply splits on punctuation, treating each punctuation character individually.
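A sketch of the default "isolated" behavior (whitespace is left untouched, so it stays attached to the surrounding pieces):

```python
from tokenizers.pre_tokenizers import Punctuation

print(Punctuation(behavior="isolated").pre_tokenize_str("Hello, world!"))
# e.g. [('Hello', (0, 5)), (',', (5, 6)), (' world', (6, 12)), ('!', (12, 13))]
```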

Sequence

class tokenizers.pre_tokenizers.Sequence

( pretokenizers )

This pre-tokenizer composes other pre-tokenizers and applies them in sequence.
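A sketch composing two of the pre-tokenizers documented on this page:

```python
from tokenizers.pre_tokenizers import Digits, Sequence, Whitespace

# First split on whitespace/punctuation, then split digits individually.
pre_tokenizer = Sequence([Whitespace(), Digits(individual_digits=True)])
print(pre_tokenizer.pre_tokenize_str("Call 123 please"))
# e.g. [('Call', (0, 4)), ('1', (5, 6)), ('2', (6, 7)), ('3', (7, 8)), ('please', (9, 15))]
```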

Split

class tokenizers.pre_tokenizers.Split

( pattern behavior invert = False )

Parameters

pattern (str or Regex): The pattern used to split the string. A plain string is matched literally; use tokenizers.Regex for a regular expression.

behavior (SplitDelimiterBehavior): The behavior to use when splitting. Choices: "removed", "isolated", "merged_with_previous", "merged_with_next", "contiguous".

invert (bool, defaults to False): Whether to invert the pattern.

Split PreTokenizer

This versatile pre-tokenizer splits using the provided pattern and according to the provided behavior. The pattern can be inverted by making use of the invert flag.
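A sketch splitting on a regex pattern, assuming tokenizers.Regex is used to mark the pattern as a regular expression:

```python
from tokenizers import Regex
from tokenizers.pre_tokenizers import Split

# Split on any run of whitespace and drop the matched delimiter.
pre_tokenizer = Split(pattern=Regex(r"\s+"), behavior="removed", invert=False)
print(pre_tokenizer.pre_tokenize_str("Hello   world"))
# e.g. [('Hello', (0, 5)), ('world', (8, 13))]
```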

UnicodeScripts

class tokenizers.pre_tokenizers.UnicodeScripts

( )

This pre-tokenizer splits on characters that belong to different language families. It roughly follows https://github.com/google/sentencepiece/blob/master/data/Scripts.txt. In practice, Hiragana and Katakana are fused with Han, and 0x30FC is treated as Han too. This mimics the SentencePiece Unigram implementation.
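A sketch on a mixed-script string (the exact grouping of the pieces is illustrative):

```python
from tokenizers.pre_tokenizers import UnicodeScripts

# Splits where the Unicode script changes, e.g. between Latin and Han characters.
print(UnicodeScripts().pre_tokenize_str("Hello世界 world"))
```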

Whitespace

class tokenizers.pre_tokenizers.Whitespace

( )

This pre-tokenizer simply splits using the following regex: \w+|[^\w\s]+

WhitespaceSplit

class tokenizers.pre_tokenizers.WhitespaceSplit

( )

This pre-tokenizer simply splits on whitespace. Works like .split()
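A sketch contrasting it with Whitespace above:

```python
from tokenizers.pre_tokenizers import Whitespace, WhitespaceSplit

text = "Hello, world!"

# Whitespace uses \w+|[^\w\s]+, so punctuation becomes its own piece.
print(Whitespace().pre_tokenize_str(text))
# e.g. [('Hello', (0, 5)), (',', (5, 6)), ('world', (7, 12)), ('!', (12, 13))]

# WhitespaceSplit only splits on whitespace, keeping punctuation attached.
print(WhitespaceSplit().pre_tokenize_str(text))
# e.g. [('Hello,', (0, 6)), ('world!', (7, 13))]
```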