LayoutXLM

This model was released on 2021-04-18 and added to Hugging Face Transformers on 2021-11-03.

PyTorch

Overview

LayoutXLM was proposed in LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding by Yiheng Xu, Tengchao Lv, Lei Cui, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Furu Wei. It’s a multilingual extension of the LayoutLMv2 model trained on 53 languages.

The abstract from the paper is the following:

Multimodal pre-training with text, layout, and image has achieved SOTA performance for visually-rich document understanding tasks recently, which demonstrates the great potential for joint learning across different modalities. In this paper, we present LayoutXLM, a multimodal pre-trained model for multilingual document understanding, which aims to bridge the language barriers for visually-rich document understanding. To accurately evaluate LayoutXLM, we also introduce a multilingual form understanding benchmark dataset named XFUN, which includes form understanding samples in 7 languages (Chinese, Japanese, Spanish, French, Italian, German, Portuguese), and key-value pairs are manually labeled for each language. Experiment results show that the LayoutXLM model has significantly outperformed the existing SOTA cross-lingual pre-trained models on the XFUN dataset.

This model was contributed by nielsr. The original code can be found here.

Usage tips and examples

One can directly plug in the weights of LayoutXLM into a LayoutLMv2 model, like so:

from transformers import LayoutLMv2Model

model = LayoutLMv2Model.from_pretrained("microsoft/layoutxlm-base")

Note that LayoutXLM has its own tokenizer: LayoutXLMTokenizer/LayoutXLMTokenizerFast. You can initialize it as follows:

from transformers import LayoutXLMTokenizer

tokenizer = LayoutXLMTokenizer.from_pretrained("microsoft/layoutxlm-base")

Similar to LayoutLMv2, you can use LayoutXLMProcessor (which internally applies LayoutLMv2ImageProcessor and LayoutXLMTokenizer/LayoutXLMTokenizerFast in sequence) to prepare all data for the model.

As LayoutXLM’s architecture is equivalent to that of LayoutLMv2, one can refer to LayoutLMv2’s documentation page for all tips, code examples and notebooks.
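Like LayoutLMv2, LayoutXLM expects each word bounding box to be normalized to a 0-1000 scale relative to the page size. A minimal sketch of that normalization, assuming pixel coordinates in (x0, y0, x1, y1) order (the helper name is illustrative, not part of the library):

```python
def normalize_box(box, width, height):
    """Scale a pixel-space box (x0, y0, x1, y1) to the 0-1000 range."""
    x0, y0, x1, y1 = box
    return [
        int(1000 * x0 / width),
        int(1000 * y0 / height),
        int(1000 * x1 / width),
        int(1000 * y1 / height),
    ]

# A box at (50, 100, 200, 300) on a 500x1000 pixel page
print(normalize_box([50, 100, 200, 300], width=500, height=1000))
# → [100, 100, 400, 300]
```

Note that when the image processor runs OCR itself (apply_ocr=True), it applies this normalization for you; you only need it when supplying your own words and boxes.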

LayoutXLMConfig

class transformers.LayoutXLMConfig

< source >

( transformers_version: str | None = None architectures: list[str] | None = None output_hidden_states: bool | None = False return_dict: bool | None = True dtype: typing.Union[str, ForwardRef('torch.dtype'), NoneType] = None chunk_size_feed_forward: int = 0 is_encoder_decoder: bool = False id2label: dict[int, str] | dict[str, str] | None = None label2id: dict[str, int] | dict[str, str] | None = None problem_type: typing.Optional[typing.Literal['regression', 'single_label_classification', 'multi_label_classification']] = None vocab_size: int = 30522 hidden_size: int = 768 num_hidden_layers: int = 12 num_attention_heads: int = 12 intermediate_size: int = 3072 hidden_act: str = 'gelu' hidden_dropout_prob: float | int = 0.1 attention_probs_dropout_prob: float | int = 0.1 max_position_embeddings: int = 512 type_vocab_size: int = 2 initializer_range: float = 0.02 layer_norm_eps: float = 1e-12 pad_token_id: int | None = 0 max_2d_position_embeddings: int = 1024 max_rel_pos: int = 128 rel_pos_bins: int = 32 fast_qkv: bool = True max_rel_2d_pos: int = 256 rel_2d_pos_bins: int = 64 convert_sync_batchnorm: bool = True image_feature_pool_shape: list[int] | tuple[int, ...] = (7, 7, 256) coordinate_size: int = 128 shape_size: int = 128 has_relative_attention_bias: bool = True has_spatial_attention_bias: bool = True has_visual_segment_embedding: bool = False detectron2_config_args: dict | None = None )

Parameters

This is the configuration class to store the configuration of a LayoutXLMModel. It is used to instantiate a LayoutXLM model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the microsoft/layoutxlm-base architecture.

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.

Example:

from transformers import LayoutXLMConfig, LayoutXLMModel

# Initializing a LayoutXLM configuration
configuration = LayoutXLMConfig()

# Initializing a model (with random weights) from the configuration
model = LayoutXLMModel(configuration)

# Accessing the model configuration
configuration = model.config

LayoutXLMTokenizer

class transformers.LayoutXLMTokenizer

< source >

( vocab: str | list | None = None bos_token = '&lt;s&gt;' eos_token = '&lt;/s&gt;' sep_token = '&lt;/s&gt;' cls_token = '&lt;s&gt;' unk_token = '&lt;unk&gt;' pad_token = '&lt;pad&gt;' mask_token = '&lt;mask&gt;' cls_token_box = [0, 0, 0, 0] sep_token_box = [1000, 1000, 1000, 1000] pad_token_box = [0, 0, 0, 0] pad_token_label = -100 only_label_first_subword = True add_prefix_space = True **kwargs )

Parameters

Construct a “fast” LayoutXLM tokenizer (backed by HuggingFace’s tokenizers library). Adapted from RobertaTokenizer and XLNetTokenizer. Based on BPE.

This tokenizer inherits from TokenizersBackend which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.

__call__

< source >

( text: str | list[str] | list[list[str]] text_pair: list[str] | list[list[str]] | None = None boxes: list[list[int]] | list[list[list[int]]] | None = None word_labels: list[int] | list[list[int]] | None = None add_special_tokens: bool = True padding: bool | str | transformers.utils.generic.PaddingStrategy = False truncation: bool | str | transformers.tokenization_utils_base.TruncationStrategy = None max_length: int | None = None stride: int = 0 pad_to_multiple_of: int | None = None padding_side: str | None = None return_tensors: str | transformers.utils.generic.TensorType | None = None return_token_type_ids: bool | None = None return_attention_mask: bool | None = None return_overflowing_tokens: bool = False return_special_tokens_mask: bool = False return_offsets_mapping: bool = False return_length: bool = False verbose: bool = True **kwargs ) → BatchEncoding

Parameters

A BatchEncoding with the following fields:

Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of sequences with word-level normalized bounding boxes and optional labels.
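The method expects word-level inputs: exactly one normalized bounding box per word, and (optionally) one label per word. A small, hypothetical validation helper sketching the alignment the tokenizer assumes (not part of the library):

```python
def validate_inputs(words, boxes, word_labels=None):
    """Check that boxes (and labels, if given) line up with words,
    and that every box coordinate is in the normalized 0-1000 range."""
    if len(words) != len(boxes):
        raise ValueError("need exactly one bounding box per word")
    if word_labels is not None and len(word_labels) != len(words):
        raise ValueError("need exactly one label per word")
    for box in boxes:
        if len(box) != 4 or not all(0 <= c <= 1000 for c in box):
            raise ValueError(f"box {box} is not a normalized (x0, y0, x1, y1) box")
    return True

words = ["hello", "world"]
boxes = [[637, 773, 693, 782], [698, 773, 733, 782]]
print(validate_inputs(words, boxes, word_labels=[1, 2]))
# → True
```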

build_inputs_with_special_tokens

< source >

( token_ids_0: list token_ids_1: list[int] | None = None ) → list[int]

Parameters

List of input IDs with the appropriate special tokens.

Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. An XLM-RoBERTa sequence has the following format:

single sequence: &lt;s&gt; X &lt;/s&gt;
pair of sequences: &lt;s&gt; A &lt;/s&gt;&lt;/s&gt; B &lt;/s&gt;
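The XLM-RoBERTa layout can be sketched as follows, with &lt;s&gt; playing the role of CLS and &lt;/s&gt; the role of SEP (token strings below are stand-ins for real vocabulary ids):

```python
CLS, SEP = ["<s>"], ["</s>"]

def build_inputs_with_special_tokens(tokens_0, tokens_1=None):
    """Single sequence: <s> X </s>; pair: <s> A </s></s> B </s>."""
    if tokens_1 is None:
        return CLS + tokens_0 + SEP
    return CLS + tokens_0 + SEP + SEP + tokens_1 + SEP

print(build_inputs_with_special_tokens(["▁Hello"]))
# → ['<s>', '▁Hello', '</s>']
print(build_inputs_with_special_tokens(["▁A"], ["▁B"]))
# → ['<s>', '▁A', '</s>', '</s>', '▁B', '</s>']
```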

get_special_tokens_mask

< source >

( token_ids_0: list[int] token_ids_1: list[int] | None = None already_has_special_tokens: bool = False ) → A list of integers in the range [0, 1]

Parameters

Returns

A list of integers in the range [0, 1]

1 for a special token, 0 for a sequence token.

Retrieve sequence ids from a token list that has no special tokens added.

For fast tokenizers, data collators call this with already_has_special_tokens=True to build a mask over an already-formatted sequence. In that case, we compute the mask by checking membership in all_special_ids.
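The membership check described above can be sketched as follows (the all_special_ids values are illustrative placeholders, not the real vocabulary ids):

```python
def special_tokens_mask(token_ids, all_special_ids):
    """1 where the id is a special token, 0 elsewhere."""
    return [1 if token_id in all_special_ids else 0 for token_id in token_ids]

# Placeholder ids: 0 = <s>, 2 = </s>
print(special_tokens_mask([0, 581, 63, 2], all_special_ids={0, 1, 2, 3}))
# → [1, 0, 0, 1]
```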

create_token_type_ids_from_sequences

< source >

( token_ids_0: list token_ids_1: list[int] | None = None ) → list[int]

Parameters

List of zeros.

Create a mask from the two sequences passed to be used in a sequence-pair classification task. XLM-RoBERTa does not make use of token type ids, therefore a list of zeros is returned.
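Because XLM-RoBERTa ignores token type ids, the returned mask is all zeros, sized to the full sequence including special tokens. A sketch with placeholder tokens:

```python
def create_token_type_ids(tokens_0, tokens_1=None):
    """Zeros covering <s> X </s> (single) or <s> A </s></s> B </s> (pair)."""
    cls, sep = ["<s>"], ["</s>"]
    if tokens_1 is None:
        return [0] * len(cls + tokens_0 + sep)
    return [0] * len(cls + tokens_0 + sep + sep + tokens_1 + sep)

print(create_token_type_ids(["a", "b"]))
# → [0, 0, 0, 0]
print(create_token_type_ids(["a"], ["b", "c"]))
# → [0, 0, 0, 0, 0, 0, 0]
```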

save_vocabulary

< source >

( save_directory: str filename_prefix: str | None = None )

LayoutXLMTokenizerFast

class transformers.LayoutXLMTokenizerFast

< source >

( vocab: str | list | None = None bos_token = '&lt;s&gt;' eos_token = '&lt;/s&gt;' sep_token = '&lt;/s&gt;' cls_token = '&lt;s&gt;' unk_token = '&lt;unk&gt;' pad_token = '&lt;pad&gt;' mask_token = '&lt;mask&gt;' cls_token_box = [0, 0, 0, 0] sep_token_box = [1000, 1000, 1000, 1000] pad_token_box = [0, 0, 0, 0] pad_token_label = -100 only_label_first_subword = True add_prefix_space = True **kwargs )

Parameters

Construct a “fast” LayoutXLM tokenizer (backed by HuggingFace’s tokenizers library). Adapted from RobertaTokenizer and XLNetTokenizer. Based on BPE.

This tokenizer inherits from TokenizersBackend which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.

__call__

< source >

( text: str | list[str] | list[list[str]] text_pair: list[str] | list[list[str]] | None = None boxes: list[list[int]] | list[list[list[int]]] | None = None word_labels: list[int] | list[list[int]] | None = None add_special_tokens: bool = True padding: bool | str | transformers.utils.generic.PaddingStrategy = False truncation: bool | str | transformers.tokenization_utils_base.TruncationStrategy = None max_length: int | None = None stride: int = 0 pad_to_multiple_of: int | None = None padding_side: str | None = None return_tensors: str | transformers.utils.generic.TensorType | None = None return_token_type_ids: bool | None = None return_attention_mask: bool | None = None return_overflowing_tokens: bool = False return_special_tokens_mask: bool = False return_offsets_mapping: bool = False return_length: bool = False verbose: bool = True **kwargs ) → BatchEncoding

Parameters

A BatchEncoding with the following fields:

Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of sequences with word-level normalized bounding boxes and optional labels.

LayoutXLMProcessor

class transformers.LayoutXLMProcessor

< source >

( image_processor = None tokenizer = None **kwargs )

Parameters

Constructs a LayoutXLMProcessor which wraps an image processor and a tokenizer into a single processor.

LayoutXLMProcessor offers all the functionalities of LayoutLMv2ImageProcessor and LayoutXLMTokenizer. See ~LayoutLMv2ImageProcessor and ~LayoutXLMTokenizer for more information.

__call__

< source >

( images text: str | list[str] | list[list[str]] = None text_pair: list[str] | list[list[str]] | None = None boxes: list[list[int]] | list[list[list[int]]] | None = None word_labels: list[int] | list[list[int]] | None = None add_special_tokens: bool = True padding: bool | str | transformers.utils.generic.PaddingStrategy = False truncation: bool | str | transformers.tokenization_utils_base.TruncationStrategy = None max_length: int | None = None stride: int = 0 pad_to_multiple_of: int | None = None return_token_type_ids: bool | None = None return_attention_mask: bool | None = None return_overflowing_tokens: bool = False return_special_tokens_mask: bool = False return_offsets_mapping: bool = False return_length: bool = False verbose: bool = True return_tensors: str | transformers.utils.generic.TensorType | None = None **kwargs ) → ~tokenization_utils_base.BatchEncoding

Parameters

Returns

~tokenization_utils_base.BatchEncoding
