LayoutXLM: Multimodal Pre-training for Multilingual Visually-rich Document Understanding

Published on Apr 18, 2021

Abstract

LayoutXLM, a multimodal pre-trained model for multilingual document understanding, outperforms existing SOTA cross-lingual models on the XFUND dataset, which includes form understanding samples in multiple languages.

Multimodal pre-training with text, layout, and image has recently achieved SOTA performance on visually-rich document understanding tasks, which demonstrates the great potential of joint learning across different modalities. In this paper, we present LayoutXLM, a multimodal pre-trained model for multilingual document understanding, which aims to bridge the language barriers for visually-rich document understanding. To accurately evaluate LayoutXLM, we also introduce a multilingual form understanding benchmark dataset named XFUND, which includes form understanding samples in 7 languages (Chinese, Japanese, Spanish, French, Italian, German, Portuguese), with key-value pairs manually labeled for each language. Experiment results show that the LayoutXLM model significantly outperforms the existing SOTA cross-lingual pre-trained models on the XFUND dataset. The pre-trained LayoutXLM model and the XFUND dataset are publicly available at https://aka.ms/layoutxlm.
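Below is a minimal sketch of how the released checkpoint can be loaded for XFUND-style form understanding (token classification) with the Hugging Face transformers library. The label count, file name, words, and bounding boxes are illustrative assumptions, not details from the paper.

```python
# A minimal sketch, assuming the Hugging Face transformers library.
from PIL import Image
from transformers import LayoutXLMProcessor, LayoutLMv2ForTokenClassification

# LayoutXLM shares the LayoutLMv2 architecture, so the LayoutLMv2 classes
# load the multilingual checkpoint. apply_ocr=False because we supply our
# own words and boxes below. (LayoutLMv2 models additionally require
# detectron2 and torchvision to be installed.)
processor = LayoutXLMProcessor.from_pretrained(
    "microsoft/layoutxlm-base", apply_ocr=False
)
model = LayoutLMv2ForTokenClassification.from_pretrained(
    "microsoft/layoutxlm-base",
    num_labels=7,  # assumed: BIO tags for QUESTION/ANSWER/HEADER plus O
)

image = Image.open("form.png").convert("RGB")  # hypothetical input file
words = ["Name:", "Jane", "Doe"]  # illustrative words and boxes
# Word bounding boxes, normalized to the 0-1000 range the model expects.
boxes = [[60, 40, 140, 60], [150, 40, 210, 60], [215, 40, 280, 60]]

# The processor handles tokenization, box alignment, and image
# preprocessing in one call.
encoding = processor(image, words, boxes=boxes, return_tensors="pt")
outputs = model(**encoding)
token_label_ids = outputs.logits.argmax(-1)  # per-token label predictions
```

Note that the token classification head here is randomly initialized; for real XFUND experiments it would be fine-tuned on the labeled key-value pairs.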


Get this paper in your agent:

hf papers read 2104.08836

Don't have the latest CLI?

curl -LsSf https://hf.co/cli/install.sh | bash

Models citing this paper 1

microsoft/layoutxlm-base — updated Sep 16, 2022 • 7.35k downloads • 74 likes

Datasets citing this paper 0

No datasets link to this paper.

Cite arxiv.org/abs/2104.08836 in a dataset README.md to link it from this page.

Spaces citing this paper 4

Collections including this paper 0

No collections include this paper.

Add this paper to a collection to link it from this page.