MarkupLM

PyTorch

Overview

The MarkupLM model was proposed in MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding by Junlong Li, Yiheng Xu, Lei Cui, Furu Wei. MarkupLM is BERT, but applied to HTML pages instead of raw text documents. The model incorporates additional embedding layers to improve performance, similar to LayoutLM.

The model can be used for tasks like question answering on web pages or information extraction from web pages. It obtains state-of-the-art results on 2 important benchmarks:

- WebSRC, a dataset for Web-Based Structural Reading Comprehension (a bit like SQuAD, but for web pages)
- SWDE, a dataset for information extraction from web pages (basically named-entity recognition on web pages)

The abstract from the paper is the following:

Multimodal pre-training with text, layout, and image has made significant progress for Visually-rich Document Understanding (VrDU), especially the fixed-layout documents such as scanned document images. While, there are still a large number of digital documents where the layout information is not fixed and needs to be interactively and dynamically rendered for visualization, making existing layout-based pre-training approaches not easy to apply. In this paper, we propose MarkupLM for document understanding tasks with markup languages as the backbone such as HTML/XML-based documents, where text and markup information is jointly pre-trained. Experiment results show that the pre-trained MarkupLM significantly outperforms the existing strong baseline models on several document understanding tasks. The pre-trained model and code will be publicly available.

This model was contributed by nielsr. The original code can be found here.

Usage tips

- In addition to input_ids, the forward pass expects two additional inputs, namely xpath_tags_seq and xpath_subs_seq. These are the XPath tags and subscripts, respectively, for each token in the input sequence.
- One can use MarkupLMProcessor to prepare all data for the model; see the usage section below.

MarkupLM architecture. Taken from the original paper.

Usage: MarkupLMProcessor

The easiest way to prepare data for the model is to use MarkupLMProcessor, which internally combines a feature extractor (MarkupLMFeatureExtractor) and a tokenizer (MarkupLMTokenizer or MarkupLMTokenizerFast). The feature extractor is used to extract all nodes and xpaths from the HTML strings, which are then provided to the tokenizer, which turns them into the token-level inputs of the model (input_ids etc.). Note that you can still use the feature extractor and tokenizer separately, if you only want to handle one of the two tasks.

```python
from transformers import MarkupLMFeatureExtractor, MarkupLMTokenizerFast, MarkupLMProcessor

feature_extractor = MarkupLMFeatureExtractor()
tokenizer = MarkupLMTokenizerFast.from_pretrained("microsoft/markuplm-base")
processor = MarkupLMProcessor(feature_extractor, tokenizer)
```

In short, one can provide HTML strings (and possibly additional data) to MarkupLMProcessor, and it will create the inputs expected by the model. Internally, the processor first uses MarkupLMFeatureExtractor to get a list of nodes and corresponding xpaths. The nodes and xpaths are then provided to MarkupLMTokenizer or MarkupLMTokenizerFast, which converts them to token-level input_ids, attention_mask, token_type_ids, xpath_tags_seq and xpath_subs_seq. Optionally, one can provide node labels to the processor, which are turned into token-level labels.

MarkupLMFeatureExtractor uses Beautiful Soup, a Python library for pulling data out of HTML and XML files, under the hood. Note that you can still use your own parsing solution of choice, and provide the nodes and xpaths yourself to MarkupLMTokenizer or MarkupLMTokenizerFast.
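For instance, here is a minimal sketch of feeding your own nodes and xpaths straight to the tokenizer, assuming (as the processor does internally) that the nodes are passed as the tokenizer's text argument with the xpaths alongside:

```python
from transformers import MarkupLMTokenizerFast

tokenizer = MarkupLMTokenizerFast.from_pretrained("microsoft/markuplm-base")

# nodes and xpaths produced by your own parsing solution instead of MarkupLMFeatureExtractor
nodes = ["hello", "world"]
xpaths = ["/html/body/div/li[1]/div/span", "/html/body/div/li[1]/div/span"]

encoding = tokenizer(nodes, xpaths=xpaths, return_tensors="pt")
print(encoding.keys())
```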

In total, there are 5 use cases that are supported by the processor. Below, we list them all. Note that each of these use cases works for both batched and non-batched inputs (we illustrate them for non-batched inputs).

Use case 1: web page classification (training, inference) + token classification (inference), parse_html=True

This is the simplest case, in which the processor will use the feature extractor to get all nodes and xpaths from the HTML.

```python
from transformers import MarkupLMProcessor

processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-base")

html_string = """
<!DOCTYPE html>
<html>
<head>
<title>Hello world</title>
</head>
<body>
<h1>Welcome</h1>
<p>Here is my website.</p>
</body>
</html>"""

# note that you can also provide all tokenizer parameters here, such as padding or truncation
encoding = processor(html_string, return_tensors="pt")
print(encoding.keys())
# dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'xpath_tags_seq', 'xpath_subs_seq'])
```
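As noted above, every use case also works for batched inputs. Here is a minimal sketch of the batched variant of this call; the padding/truncation keyword arguments shown are the standard tokenizer ones, which the processor forwards:

```python
from transformers import MarkupLMProcessor

processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-base")

html_strings = [
    "<html><body><p>First page.</p></body></html>",
    "<html><body><p>Second page, a little longer.</p></body></html>",
]

# batched call; padding/truncation arguments are forwarded to the tokenizer
encoding = processor(html_strings, padding="max_length", truncation=True, max_length=512, return_tensors="pt")
print(encoding["input_ids"].shape)  # torch.Size([2, 512])
```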

Use case 2: web page classification (training, inference) + token classification (inference), parse_html=False

If one has already obtained all nodes and xpaths, the feature extractor isn't needed. In that case, one should provide the nodes and corresponding xpaths directly to the processor, and make sure to set parse_html to False.

```python
from transformers import MarkupLMProcessor

processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-base")
processor.parse_html = False

nodes = ["hello", "world", "how", "are"]
xpaths = ["/html/body/div/li[1]/div/span", "/html/body/div/li[1]/div/span", "html/body", "html/body/div"]
encoding = processor(nodes=nodes, xpaths=xpaths, return_tensors="pt")
print(encoding.keys())
# dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'xpath_tags_seq', 'xpath_subs_seq'])
```

Use case 3: token classification (training), parse_html=False

For token classification tasks (such as SWDE), one can also provide the corresponding node labels in order to train a model. The processor will then convert these into token-level labels. By default, it will only label the first wordpiece of a word, and label the remaining wordpieces with -100, which is the ignore_index of PyTorch's CrossEntropyLoss. In case you want all wordpieces of a word to be labeled, you can initialize the tokenizer with only_label_first_subword set to False, as sketched after the code example below.

```python
from transformers import MarkupLMProcessor

processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-base")
processor.parse_html = False

nodes = ["hello", "world", "how", "are"]
xpaths = ["/html/body/div/li[1]/div/span", "/html/body/div/li[1]/div/span", "html/body", "html/body/div"]
node_labels = [1, 2, 2, 1]
encoding = processor(nodes=nodes, xpaths=xpaths, node_labels=node_labels, return_tensors="pt")
print(encoding.keys())
# dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'xpath_tags_seq', 'xpath_subs_seq', 'labels'])
```

Use case 4: web page question answering (inference), parse_html=True

For question answering tasks on web pages, you can provide a question to the processor. By default, the processor will use the feature extractor to get all nodes and xpaths, and create [CLS] question tokens [SEP] word tokens [SEP].

```python
from transformers import MarkupLMProcessor

processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-base")

html_string = """
<!DOCTYPE html>
<html>
<head>
<title>Hello world</title>
</head>
<body>
<h1>Welcome</h1>
<p>My name is Niels.</p>
</body>
</html>"""

question = "What's his name?"
encoding = processor(html_string, questions=question, return_tensors="pt")
print(encoding.keys())
# dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'xpath_tags_seq', 'xpath_subs_seq'])
```

Use case 5: web page question answering (inference), parse_html=False

For question answering tasks (such as WebSRC), you can provide a question to the processor. If you have extracted all nodes and xpaths yourself, you can provide them directly to the processor. Make sure to set parse_html to False.

```python
from transformers import MarkupLMProcessor

processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-base")
processor.parse_html = False

nodes = ["hello", "world", "how", "are"]
xpaths = ["/html/body/div/li[1]/div/span", "/html/body/div/li[1]/div/span", "html/body", "html/body/div"]
question = "What's his name?"
encoding = processor(nodes=nodes, xpaths=xpaths, questions=question, return_tensors="pt")
print(encoding.keys())
# dict_keys(['input_ids', 'token_type_ids', 'attention_mask', 'xpath_tags_seq', 'xpath_subs_seq'])
```

Resources

MarkupLMConfig

class transformers.MarkupLMConfig

( vocab_size = 30522 hidden_size = 768 num_hidden_layers = 12 num_attention_heads = 12 intermediate_size = 3072 hidden_act = 'gelu' hidden_dropout_prob = 0.1 attention_probs_dropout_prob = 0.1 max_position_embeddings = 512 type_vocab_size = 2 initializer_range = 0.02 layer_norm_eps = 1e-12 pad_token_id = 0 bos_token_id = 0 eos_token_id = 2 max_xpath_tag_unit_embeddings = 256 max_xpath_subs_unit_embeddings = 1024 tag_pad_id = 216 subs_pad_id = 1001 xpath_unit_hidden_size = 32 max_depth = 50 position_embedding_type = 'absolute' use_cache = True classifier_dropout = None **kwargs )

This is the configuration class to store the configuration of a MarkupLMModel. It is used to instantiate a MarkupLM model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the MarkupLM microsoft/markuplm-base architecture.

Configuration objects inherit from BertConfig and can be used to control the model outputs. Read the documentation from BertConfig for more information.

Examples:

```python
from transformers import MarkupLMModel, MarkupLMConfig

# Initializing a MarkupLM microsoft/markuplm-base style configuration
configuration = MarkupLMConfig()

# Initializing a model from the configuration
model = MarkupLMModel(configuration)

# Accessing the model configuration
configuration = model.config
```
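The xpath-specific hyperparameters can be overridden like any other configuration field. A minimal sketch (the values are for illustration only; a randomly initialized model results, and the config's max_depth should stay consistent with the tokenizer's):

```python
from transformers import MarkupLMConfig, MarkupLMModel

# illustration only: a variant with wider xpath unit embeddings and deeper xpaths
configuration = MarkupLMConfig(xpath_unit_hidden_size=64, max_depth=60)
model = MarkupLMModel(configuration)
```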

MarkupLMFeatureExtractor

Constructs a MarkupLM feature extractor. This can be used to get a list of nodes and corresponding xpaths from HTML strings.

This feature extractor inherits from PreTrainedFeatureExtractor() which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.

__call__

( html_strings ) → BatchFeature

Main method to prepare one or several HTML strings for the model.

Returns a BatchFeature with the following fields: nodes (the text nodes extracted from each HTML string) and xpaths (the corresponding xpath expressions).

Examples:

```python
from transformers import MarkupLMFeatureExtractor

page_name_1 = "page1.html"
page_name_2 = "page2.html"
page_name_3 = "page3.html"

with open(page_name_1) as f:
    single_html_string = f.read()

feature_extractor = MarkupLMFeatureExtractor()

# single example
encoding = feature_extractor(single_html_string)
print(encoding.keys())
# dict_keys(['nodes', 'xpaths'])

# batched example
multi_html_strings = []

with open(page_name_2) as f:
    multi_html_strings.append(f.read())
with open(page_name_3) as f:
    multi_html_strings.append(f.read())

encoding = feature_extractor(multi_html_strings)
print(encoding.keys())
# dict_keys(['nodes', 'xpaths'])
```

MarkupLMTokenizer

class transformers.MarkupLMTokenizer

( vocab_file merges_file tags_dict errors = 'replace' bos_token = '&lt;s&gt;' eos_token = '&lt;/s&gt;' sep_token = '&lt;/s&gt;' cls_token = '&lt;s&gt;' unk_token = '&lt;unk&gt;' pad_token = '&lt;pad&gt;' mask_token = '&lt;mask&gt;' add_prefix_space = False max_depth = 50 max_width = 1000 pad_width = 1001 pad_token_label = -100 only_label_first_subword = True **kwargs )

Construct a MarkupLM tokenizer. Based on byte-level Byte-Pair-Encoding (BPE). MarkupLMTokenizer can be used to turn HTML strings into token-level input_ids, attention_mask, token_type_ids, xpath_tags_seq and xpath_subs_seq. This tokenizer inherits from PreTrainedTokenizer, which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.

build_inputs_with_special_tokens

( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int]

Returns a List[int]: the list of input IDs with the appropriate special tokens.

Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. A RoBERTa sequence has the following format:

- single sequence: `<s> X </s>`
- pair of sequences: `<s> A </s></s> B </s>`
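A quick sketch of what those formats look like in practice (the exact IDs depend on the vocabulary; 0 and 2 are the bos/eos token IDs from the configuration above):

```python
from transformers import MarkupLMTokenizer

tokenizer = MarkupLMTokenizer.from_pretrained("microsoft/markuplm-base")

ids_a = tokenizer.convert_tokens_to_ids(["hello"])
ids_b = tokenizer.convert_tokens_to_ids(["world"])

# single sequence: <s> X </s>
print(tokenizer.build_inputs_with_special_tokens(ids_a))
# pair of sequences: <s> A </s></s> B </s>
print(tokenizer.build_inputs_with_special_tokens(ids_a, ids_b))
```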

get_special_tokens_mask

( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None already_has_special_tokens: bool = False ) → List[int]

Returns a List[int]: a list of integers in the range [0, 1], with 1 for a special token and 0 for a sequence token.

Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer's prepare_for_model method.

create_token_type_ids_from_sequences

( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int]

Returns a List[int]: a list of zeros.

Create a mask from the two sequences passed, to be used in a sequence-pair classification task. RoBERTa does not make use of token type ids, therefore a list of zeros is returned.

save_vocabulary

( save_directory: str filename_prefix: typing.Optional[str] = None )

MarkupLMTokenizerFast

class transformers.MarkupLMTokenizerFast

( vocab_file merges_file tags_dict tokenizer_file = None errors = 'replace' bos_token = '&lt;s&gt;' eos_token = '&lt;/s&gt;' sep_token = '&lt;/s&gt;' cls_token = '&lt;s&gt;' unk_token = '&lt;unk&gt;' pad_token = '&lt;pad&gt;' mask_token = '&lt;mask&gt;' add_prefix_space = False max_depth = 50 max_width = 1000 pad_width = 1001 pad_token_label = -100 only_label_first_subword = True trim_offsets = False **kwargs )

Construct a MarkupLM tokenizer. Based on byte-level Byte-Pair-Encoding (BPE).

MarkupLMTokenizerFast can be used to turn HTML strings into token-level input_ids, attention_mask, token_type_ids, xpath_tags_seq and xpath_subs_seq. This tokenizer inherits from PreTrainedTokenizer, which contains most of the main methods.

Users should refer to this superclass for more information regarding those methods.

batch_encode_plus

( batch_text_or_text_pairs: typing.Union[typing.List[str], typing.List[typing.Tuple[str, str]], typing.List[typing.List[str]]] is_pair: typing.Optional[bool] = None xpaths: typing.Optional[typing.List[typing.List[typing.List[int]]]] = None node_labels: typing.Union[typing.List[int], typing.List[typing.List[int]], NoneType] = None add_special_tokens: bool = True padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False truncation: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = None max_length: typing.Optional[int] = None stride: int = 0 pad_to_multiple_of: typing.Optional[int] = None padding_side: typing.Optional[str] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None return_token_type_ids: typing.Optional[bool] = None return_attention_mask: typing.Optional[bool] = None return_overflowing_tokens: bool = False return_special_tokens_mask: bool = False return_offsets_mapping: bool = False return_length: bool = False verbose: bool = True **kwargs )

Parameters

- add_special_tokens (bool, optional, defaults to True): Whether or not to add special tokens when encoding the sequences. This will use the underlying PretrainedTokenizerBase.build_inputs_with_special_tokens function, which defines which tokens are automatically added to the input ids. This is useful if you want to add bos or eos tokens automatically.
- padding (bool, str or PaddingStrategy, optional, defaults to False): Activates and controls padding.
- max_length (int, optional): Controls the maximum length to use by one of the truncation/padding parameters. If left unset or set to None, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet), truncation/padding to a maximum length will be deactivated.
- stride (int, optional, defaults to 0): If set to a number along with max_length, the overflowing tokens returned when return_overflowing_tokens=True will contain some tokens from the end of the truncated sequence, to provide some overlap between truncated and overflowing sequences. The value of this argument defines the number of overlapping tokens.
- is_split_into_words (bool, optional, defaults to False): Whether or not the input is already pre-tokenized (e.g., split into words). If set to True, the tokenizer assumes the input is already split into words (for instance, by splitting it on whitespace), which it will tokenize. This is useful for NER or token classification.
- pad_to_multiple_of (int, optional): If set, will pad the sequence to a multiple of the provided value. Requires padding to be activated. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta).
- padding_side (str, optional): The side on which the model should have padding applied. Should be selected between ['right', 'left']. Default value is picked from the class attribute of the same name.
- return_tensors (str or TensorType, optional): If set, will return tensors instead of lists of Python integers.

build_inputs_with_special_tokens

( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int]

Returns a List[int]: the list of input IDs with the appropriate special tokens.

Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. A RoBERTa sequence has the following format:

- single sequence: `<s> X </s>`
- pair of sequences: `<s> A </s></s> B </s>`

create_token_type_ids_from_sequences

( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int]

Returns a List[int]: a list of zeros.

Create a mask from the two sequences passed, to be used in a sequence-pair classification task. RoBERTa does not make use of token type ids, therefore a list of zeros is returned.

encode_plus

( text: typing.Union[str, typing.List[str]] text_pair: typing.Optional[typing.List[str]] = None xpaths: typing.Optional[typing.List[typing.List[int]]] = None node_labels: typing.Optional[typing.List[int]] = None add_special_tokens: bool = True padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False truncation: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = None max_length: typing.Optional[int] = None stride: int = 0 pad_to_multiple_of: typing.Optional[int] = None padding_side: typing.Optional[str] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None return_token_type_ids: typing.Optional[bool] = None return_attention_mask: typing.Optional[bool] = None return_overflowing_tokens: bool = False return_special_tokens_mask: bool = False return_offsets_mapping: bool = False return_length: bool = False verbose: bool = True **kwargs )

Tokenize and prepare for the model a sequence or a pair of sequences.

Warning: this method is deprecated; __call__ should be used instead.

Given the xpath expression of one particular node (like “/html/body/div/li[1]/div/span[2]”), return a list of tag IDs and corresponding subscripts, taking into account max depth.
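This appears to describe the tokenizer's xpath-decomposition helper. Here is a hedged sketch, assuming the helper is the get_xpath_seq method found in the MarkupLM tokenizer code and that it returns the tag-ID and subscript lists padded to max_depth:

```python
from transformers import MarkupLMTokenizerFast

tokenizer = MarkupLMTokenizerFast.from_pretrained("microsoft/markuplm-base")

# assumption: get_xpath_seq returns (xpath_tags_list, xpath_subs_list), each padded to max_depth
xpath_tags_list, xpath_subs_list = tokenizer.get_xpath_seq("/html/body/div/li[1]/div/span[2]")
print(len(xpath_tags_list), len(xpath_subs_list))  # both equal tokenizer.max_depth (50 by default)
```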

MarkupLMProcessor

class transformers.MarkupLMProcessor

( *args **kwargs )

Constructs a MarkupLM processor which combines a MarkupLM feature extractor and a MarkupLM tokenizer into a single processor.

MarkupLMProcessor offers all the functionalities you need to prepare data for the model.

It first uses MarkupLMFeatureExtractor to extract nodes and corresponding xpaths from one or more HTML strings. Next, these are provided to MarkupLMTokenizer or MarkupLMTokenizerFast, which turns them into token-level input_ids, attention_mask, token_type_ids, xpath_tags_seq and xpath_subs_seq.

__call__

( html_strings = None nodes = None xpaths = None node_labels = None questions = None add_special_tokens: bool = True padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False truncation: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = None max_length: typing.Optional[int] = None stride: int = 0 pad_to_multiple_of: typing.Optional[int] = None return_token_type_ids: typing.Optional[bool] = None return_attention_mask: typing.Optional[bool] = None return_overflowing_tokens: bool = False return_special_tokens_mask: bool = False return_offsets_mapping: bool = False return_length: bool = False verbose: bool = True return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None **kwargs )

This method first forwards the html_strings argument to MarkupLMFeatureExtractor's __call__(). Next, it passes the nodes and xpaths along with the additional arguments to the tokenizer's __call__() and returns the output.

Optionally, one can also provide a text argument, which is passed along as the first sequence.

Please refer to the docstring of the above two methods for more information.

MarkupLMModel

class transformers.MarkupLMModel

( config add_pooling_layer = True )

The bare MarkupLM Model outputting raw hidden-states without any specific head on top.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward

( input_ids: typing.Optional[torch.LongTensor] = None xpath_tags_seq: typing.Optional[torch.LongTensor] = None xpath_subs_seq: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor)

Returns a transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False), comprising various elements depending on the configuration (MarkupLMConfig) and inputs.

The MarkupLMModel forward method overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Examples:

```python
from transformers import AutoProcessor, MarkupLMModel

processor = AutoProcessor.from_pretrained("microsoft/markuplm-base")
model = MarkupLMModel.from_pretrained("microsoft/markuplm-base")

html_string = "<html> <head> <title>Page Title</title> </head> </html>"

encoding = processor(html_string, return_tensors="pt")

outputs = model(**encoding)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
# [1, 4, 768]
```

MarkupLMForSequenceClassification

class transformers.MarkupLMForSequenceClassification

( config )

MarkupLM Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward

( input_ids: typing.Optional[torch.Tensor] = None xpath_tags_seq: typing.Optional[torch.Tensor] = None xpath_subs_seq: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)

Returns a transformers.modeling_outputs.SequenceClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False), comprising various elements depending on the configuration (MarkupLMConfig) and inputs.

The MarkupLMForSequenceClassification forward method overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Examples:

```python
from transformers import AutoProcessor, AutoModelForSequenceClassification
import torch

processor = AutoProcessor.from_pretrained("microsoft/markuplm-base")
model = AutoModelForSequenceClassification.from_pretrained("microsoft/markuplm-base", num_labels=7)

html_string = "<html> <head> <title>Page Title</title> </head> </html>"
encoding = processor(html_string, return_tensors="pt")

with torch.no_grad():
    outputs = model(**encoding)

loss = outputs.loss
logits = outputs.logits
```
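Continuing the snippet above, a sketch of turning the logits into a predicted label (with num_labels=7 the config auto-generates LABEL_0 … LABEL_6 names unless id2label has been configured):

```python
# continues the snippet above: pick the highest-scoring class
predicted_class_id = logits.argmax(-1).item()
print(model.config.id2label[predicted_class_id])
```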

MarkupLMForTokenClassification

class transformers.MarkupLMForTokenClassification

( config )

MarkupLM Model with a token classification head on top.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward

( input_ids: typing.Optional[torch.Tensor] = None xpath_tags_seq: typing.Optional[torch.Tensor] = None xpath_subs_seq: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor)

Returns a transformers.modeling_outputs.MaskedLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False), comprising various elements depending on the configuration (MarkupLMConfig) and inputs.

The MarkupLMForTokenClassification forward method overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Examples:

```python
from transformers import AutoProcessor, AutoModelForTokenClassification
import torch

processor = AutoProcessor.from_pretrained("microsoft/markuplm-base")
processor.parse_html = False
model = AutoModelForTokenClassification.from_pretrained("microsoft/markuplm-base", num_labels=7)

nodes = ["hello", "world"]
xpaths = ["/html/body/div/li[1]/div/span", "/html/body/div/li[1]/div/span"]
node_labels = [1, 2]
encoding = processor(nodes=nodes, xpaths=xpaths, node_labels=node_labels, return_tensors="pt")

with torch.no_grad():
    outputs = model(**encoding)

loss = outputs.loss
logits = outputs.logits
```
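Continuing the snippet above, a sketch of reading off per-token predictions; positions labeled -100 (special tokens and, by default, non-first wordpieces) should be ignored when evaluating:

```python
# continues the snippet above
predictions = logits.argmax(-1)        # shape: (batch_size, sequence_length)
mask = encoding["labels"] != -100      # keep only positions that carry a real label
print(predictions[mask])
```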

MarkupLMForQuestionAnswering

class transformers.MarkupLMForQuestionAnswering

( config )

The MarkupLM transformer with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute span start logits and span end logits).

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward

( input_ids: typing.Optional[torch.Tensor] = None xpath_tags_seq: typing.Optional[torch.Tensor] = None xpath_subs_seq: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None start_positions: typing.Optional[torch.Tensor] = None end_positions: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)

Returns a transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False), comprising various elements depending on the configuration (MarkupLMConfig) and inputs.

The MarkupLMForQuestionAnswering forward method overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Examples:

```python
from transformers import AutoProcessor, MarkupLMForQuestionAnswering
import torch

processor = AutoProcessor.from_pretrained("microsoft/markuplm-base-finetuned-websrc")
model = MarkupLMForQuestionAnswering.from_pretrained("microsoft/markuplm-base-finetuned-websrc")

html_string = "<html> <head> <title>My name is Niels</title> </head> </html>"
question = "What's his name?"

encoding = processor(html_string, questions=question, return_tensors="pt")

with torch.no_grad():
    outputs = model(**encoding)

answer_start_index = outputs.start_logits.argmax()
answer_end_index = outputs.end_logits.argmax()

predict_answer_tokens = encoding.input_ids[0, answer_start_index : answer_end_index + 1]
processor.decode(predict_answer_tokens).strip()
# 'Niels'
```
