LayoutLM: Pre-training of Text and Layout for Document Image Understanding

Abstract

Pre-training techniques have been verified successfully in a variety of NLP tasks in recent years. Despite the widespread use of pre-training models for NLP applications, they almost exclusively focus on text-level manipulation, while neglecting layout and style information that is vital for document image understanding. In this paper, we propose LayoutLM to jointly model the interaction between text and layout information across scanned document images, which is beneficial for a great number of real-world document image understanding tasks such as information extraction from scanned documents. We also leverage image features to incorporate the style information of words in LayoutLM. To the best of our knowledge, this is the first time that text and layout are jointly learned in a single framework for document-level pre-training, leading to significant performance improvements in downstream tasks for document image understanding.


1 Introduction

Document AI, or Document Intelligence, is a relatively new research topic that refers to techniques for automatically reading, understanding, and analyzing business documents. Business documents are files that provide details related to a company’s internal and external transactions, as shown in Figure 1. They may be digital-born, existing as electronic files, or they may be scans of documents written or printed on paper. Some common examples of business documents include purchase orders, financial reports, business emails, sales agreements, vendor contracts, letters, invoices, receipts, resumes, and many others. Business documents are critical to a company’s efficiency and productivity. The exact format of a business document may vary, but the information is usually presented in natural language and can be organized in a variety of ways, from plain text to multi-column layouts and a wide variety of tables/forms/figures. Understanding business documents is a very challenging task due to the diversity of layouts and formats, the poor quality of scanned document images, and the complexity of template structures.

Figure 1: Scanned images of business documents with different layouts and formats (panels a-d)

Nowadays, many companies extract data from business documents through manual efforts that are time-consuming and expensive, meanwhile requiring manual customization or configuration. Rules and workflows for each type of document often need to be hard-coded and updated with changes to the specific format or when dealing with multiple formats. To address these problems, document AI models and algorithms are designed to automatically classify, extract, and structure information from business documents, accelerating automated document processing workflows. Contemporary approaches for document AI are usually built upon deep neural networks from a computer vision perspective or a natural language processing perspective, or a combination of the two. Early attempts usually focused on detecting and analyzing certain parts of a document, such as tabular areas. Hao et al. (2016) first proposed a table detection method for PDF documents based on Convolutional Neural Networks (CNN). After that, Schreiber et al. (2017), Soto and Yoo (2019), and Zhong et al. (2019) leveraged the more advanced Faster R-CNN model (Ren et al., 2015) or Mask R-CNN model (He et al., 2017) to further improve the accuracy of document layout analysis. In addition, Yang et al. (2017) presented an end-to-end, multimodal, fully convolutional network for extracting semantic structures from document images, taking advantage of text embeddings from pre-trained NLP models. More recently, Liu et al. (2019) introduced a Graph Convolutional Network (GCN) based model to combine textual and visual information for information extraction from business documents. Although these models have made significant progress in the document AI area with deep neural networks, most of them confront two limitations: (1) they rely on only a few human-labeled training samples and do not fully explore the possibility of using large-scale unlabeled training samples; (2) they usually leverage either pre-trained CV models or NLP models, but do not consider the joint training of textual and layout information. Therefore, it is essential to investigate how self-supervised pre-training of text and layout may help in the document AI area.

Figure 2: An example of LayoutLM, where 2-D layout and image embeddings are added to the original BERT architecture

To this end, we propose LayoutLM, a simple but effective pre-training method of text and layout for document image understanding tasks. Inspired by the BERT model (Devlin et al., 2019), where input textual information is mainly represented by text embeddings and position embeddings, LayoutLM further adds two types of input embeddings: (1) a 2-D position embedding that denotes the relative position of a token within a document; (2) an image embedding for scanned token images within a document. The architecture of LayoutLM is shown in Figure 2. We add these two input embeddings because the 2-D position embedding can capture the relationships among tokens within a document, while the image embedding can capture appearance features such as font directions, types, and colors. In addition, we adopt a multi-task learning objective for LayoutLM, including a Masked Visual-Language Model (MVLM) loss and a Multi-label Document Classification (MDC) loss, which further enforces joint pre-training of text and layout. In this work, our focus is document pre-training based on scanned document images; digital-born documents are less challenging because they can be considered a special case where OCR is not required, and are therefore out of the scope of this paper.

LayoutLM is pre-trained on the IIT-CDIP Test Collection 1.0 (Lewis et al., 2006), which contains more than 6 million scanned documents with 11 million scanned document images. The scanned documents cover a variety of categories, including letter, memo, email, file folder, form, handwritten, invoice, advertisement, budget, news article, presentation, scientific publication, questionnaire, resume, scientific report, specification, and many others, which makes the collection well suited for large-scale unsupervised pre-training.

We select three benchmark datasets as downstream tasks to evaluate the performance of the pre-trained LayoutLM model. The first is the RVL-CDIP dataset (Harley et al., 2015) for document image classification, which consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images. The second is the FUNSD dataset (Jaume et al., 2019), which is used for spatial layout analysis and form understanding. The FUNSD dataset contains 199 fully annotated forms with 31,485 words and 9,707 semantic entities. The third is the SROIE dataset for Scanned Receipts Information Extraction, which contains 626 annotated receipts for training and 347 receipts for testing. Experimental results illustrate that the pre-trained LayoutLM model significantly outperforms several SOTA pre-trained models on these benchmark datasets, demonstrating the advantage of jointly pre-training text and layout information for document image understanding tasks.

The contributions of this paper are summarized as follows:

  • For the first time, textual and layout information from scanned document images is pre-trained in a single framework.

  • LayoutLM uses the masked visual-language model and multi-label document classification as training objectives, and significantly outperforms several SOTA pre-trained models in document image understanding tasks.

  • The code and the pre-trained LayoutLM model will be publicly available for more downstream tasks.

2 LayoutLM

In this section, we briefly review the BERT model and introduce how we extend it to jointly model text and layout information in the LayoutLM framework.

2.1 The BERT model

The BERT model is an attention-based bidirectional language modeling approach. It has been shown that the BERT model transfers knowledge effectively from its self-supervised pre-training task on large-scale training data.
The architecture of BERT is basically a multi-layer bidirectional Transformer encoder. It accepts a sequence of discrete tokens and stacks multiple layers to produce final representations. In detail, given a set of tokens processed using WordPiece, the input embeddings are computed by summing the corresponding word embeddings, position embeddings, and segment embeddings. Then, these input embeddings are passed through a multi-layer bidirectional Transformer that generates contextualized representations with an adaptive attention mechanism.
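
As a minimal sketch of this input construction (the sizes follow the common BERT-base configuration and are illustrative, not taken from any code released with this paper), the three embeddings are simply summed per token:

```python
import torch
import torch.nn as nn

class BertInputEmbeddings(nn.Module):
    """Sketch: BERT-style input = word + position + segment embeddings (sizes illustrative)."""
    def __init__(self, vocab_size=30522, hidden_size=768, max_position=512, type_vocab_size=2):
        super().__init__()
        self.word_embeddings = nn.Embedding(vocab_size, hidden_size)
        self.position_embeddings = nn.Embedding(max_position, hidden_size)
        self.segment_embeddings = nn.Embedding(type_vocab_size, hidden_size)
        self.layer_norm = nn.LayerNorm(hidden_size)

    def forward(self, token_ids, segment_ids):
        # token_ids, segment_ids: (batch, seq_len) integer tensors
        positions = torch.arange(token_ids.size(1), device=token_ids.device).unsqueeze(0)
        summed = (self.word_embeddings(token_ids)
                  + self.position_embeddings(positions)
                  + self.segment_embeddings(segment_ids))
        return self.layer_norm(summed)
```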

There are two steps in the BERT framework: pre-training and fine-tuning. During the pre-training step, the model uses two objectives to learn the language representation: Masked Language Modeling (MLM) and Next Sentence Prediction (NSP), where MLM randomly masks some of the input tokens and the objective is to recover these masked tokens, and NSP is a binary classification task taking a pair of sentences as inputs and classifying whether they are two consecutive sentences. In the fine-tuning step, task-specific datasets are used to update all parameters in an end-to-end way.

2.2 The LayoutLM Model

Although BERT-like models have become the state-of-the-art on several challenging NLP tasks, they usually leverage text information only, regardless of the kind of input. When it comes to visually rich documents, there is much more information that can be encoded into the pre-trained model. Therefore, we propose to utilize the rich information from document layouts and align it with the input text. Basically, there are two types of features that can significantly improve the language representation of a visually rich document:

Document Layout Information

It is evident that the relative positions of words in a document contribute a lot to the semantic representation. Taking form understanding as an example, given a key in a form, e.g. “Passport ID:”, its corresponding value is much more likely to appear to its right or below it than to its left or above it. Therefore, building on the self-attention mechanism within the Transformer, embedding 2-D position features into the language representation will better align the layout information with the semantic representation.

Visual Information

Compared with text, visual information is another significantly important feature in document representations. Authors typically use visual signals to indicate the importance and priority of document segments, and this visual information can be represented as image features and effectively utilized in document representations. For instance, the title of a document is usually set in bold or in a larger font, which signals importance and high priority. Therefore, we believe that combining image features with traditional text representations can bring much richer semantic representations to documents.

2.3 Model Architecture

To take advantage of existing pre-trained models and adapt to document image understanding tasks, we use the BERT architecture as the backbone and add two types of new input embeddings: a 2-D position embedding and an image embedding.

2-D Position Embedding

Unlike the position embedding that models the word position in a sequence, the 2-D position embedding aims to model the relative spatial position in a document. To represent the spatial position of elements in scanned document images, we consider a document page as a coordinate system with its origin at the top-left corner. In this setting, a bounding box can be precisely defined by (x0, y0, x1, y1), where (x0, y0) is the position of its upper-left corner and (x1, y1) is the position of its lower-right corner. Accordingly, we add four position embedding layers that share two embedding tables: layers representing the same dimension share the same table, so the position embeddings of x0 and x1 are looked up in embedding table X, while those of y0 and y1 are looked up in table Y.
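
The following is a minimal sketch of this lookup scheme (module and table names are ours, and the exact released implementation may differ): x0 and x1 share one table, y0 and y1 share another, and the four resulting vectors are summed with the other input embeddings.

```python
import torch
import torch.nn as nn

class TwoDPositionEmbedding(nn.Module):
    """Sketch: four coordinate lookups into two shared tables (X for x0/x1, Y for y0/y1)."""
    def __init__(self, max_coord=1001, hidden_size=768):
        super().__init__()
        self.x_table = nn.Embedding(max_coord, hidden_size)  # shared by x0 and x1
        self.y_table = nn.Embedding(max_coord, hidden_size)  # shared by y0 and y1

    def forward(self, boxes):
        # boxes: (batch, seq_len, 4) integer tensor holding (x0, y0, x1, y1) per token
        x0, y0, x1, y1 = boxes.unbind(dim=-1)
        return self.x_table(x0) + self.y_table(y0) + self.x_table(x1) + self.y_table(y1)
```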

Image Embedding

To utilize the image features of a document and align them with the text, we add an image embedding layer to represent image features in the language representation. In more detail, using the bounding box of each word from the OCR results, we split the image into pieces that have a one-to-one correspondence with the words. We generate image region features for these pieces with a pre-trained ResNet (He et al., 2016) image encoder and use them as the token image embeddings. For the CLS token, we also use the pre-trained ResNet-50 to produce an embedding of the whole scanned document image, which benefits downstream tasks that need the representation of the CLS token.
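
A rough sketch of how such region features could be computed with torchvision follows; the cropping, resizing, and projection details here are our assumptions, not the authors' released pipeline.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

resnet = models.resnet50(pretrained=True)                 # pre-trained ResNet-50
backbone = nn.Sequential(*list(resnet.children())[:-1])   # drop the classification head
backbone.eval()

preprocess = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
proj = nn.Linear(2048, 768)  # assumption: project ResNet features to the Transformer hidden size

def token_image_embeddings(page_image, word_boxes):
    """page_image: PIL image of the scanned page; word_boxes: list of (x0, y0, x1, y1) pixel boxes."""
    page = page_image.convert("RGB")
    crops = [page.crop(box) for box in word_boxes]         # one image piece per word
    crops.append(page)                                     # whole page image for the CLS token
    batch = torch.stack([preprocess(c) for c in crops])
    with torch.no_grad():
        feats = backbone(batch).flatten(1)                 # (num_words + 1, 2048)
    return proj(feats)                                     # (num_words + 1, 768)
```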

2.4 Pre-training LayoutLM

Task #1: Masked Visual-Language Model

Inspired by the masked language model, we propose the Masked Visual-Language Model (MVLM) to learn the language representation with the clues of 2-D position embeddings and text embeddings. During pre-training, we randomly mask some of the input tokens but keep their corresponding 2-D position embeddings, and then train the model to predict the masked tokens given the context. In this way, the LayoutLM model not only understands the language context but also utilizes the corresponding 2-D position information, thereby bridging the gap between the visual and language modalities.
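
A simplified sketch of this masking step is shown below; the 15% masking rate and the special-token handling follow common MLM practice and are our assumptions. The token IDs are masked, while the box tensor carrying the 2-D positions is passed to the model unchanged.

```python
import torch

def mvlm_mask(token_ids, mask_token_id=103, mask_prob=0.15):
    """Mask a fraction of tokens for MVLM; the 2-D boxes are left untouched and fed alongside."""
    mask = torch.rand(token_ids.shape, device=token_ids.device) < mask_prob
    labels = token_ids.clone()
    labels[~mask] = -100                  # only masked positions contribute to the loss
    masked_ids = token_ids.clone()
    masked_ids[mask] = mask_token_id      # the text is hidden, but its 2-D position stays visible
    return masked_ids, labels

# Sketch of a training step: the model receives (masked_ids, boxes) and is trained with
# cross-entropy between its vocabulary logits and `labels` at the masked positions.
```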

Task #2: Multi-label Document Classification

For document image understanding, many tasks require the model to generate high-quality document-level representations. As the IIT-CDIP Test Collection includes multiple tags for each document image, we also use a Multi-label Document Classification (MDC) loss during the pre-training phase. Given a set of scanned documents, we use the document tags to supervise the pre-training process so that the model can cluster knowledge from different domains and generate better document-level representations. Since the MDC loss requires a label for each document image, which may not exist for larger datasets, it is optional during pre-training and may not be used when pre-training larger models in the future. We compare the performance of MVLM and MVLM+MDC in Section 3.
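
Because a document may carry several tags at once, a natural way to implement this loss (our sketch, not necessarily the authors' exact head) is one sigmoid output per tag trained with binary cross-entropy on the document-level representation:

```python
import torch.nn as nn

class MultiLabelDocHead(nn.Module):
    """Sketch of an MDC head: one binary output per document tag."""
    def __init__(self, hidden_size=768, num_tags=16):  # num_tags is illustrative
        super().__init__()
        self.classifier = nn.Linear(hidden_size, num_tags)
        self.loss_fn = nn.BCEWithLogitsLoss()

    def forward(self, cls_representation, tag_targets):
        # cls_representation: (batch, hidden_size); tag_targets: (batch, num_tags) with 0/1 entries
        logits = self.classifier(cls_representation)
        return self.loss_fn(logits, tag_targets.float())
```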

2.5 Fine-tuning LayoutLM

The pre-trained LayoutLM model is fine-tuned on three document image understanding tasks: a form understanding task, a receipt understanding task, and a document image classification task. For the form and receipt understanding tasks, LayoutLM predicts {B, I, E, S, O} tags for each token and uses sequence labeling to detect each type of entity in the dataset. For the document image classification task, LayoutLM predicts the class labels using the representation of the CLS token.
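
As an illustration of the tagging scheme (the helper below is ours, not part of the paper), an annotated entity span is converted into {B, I, E, S} tags, with tokens outside any entity tagged O:

```python
def bies_tags(num_tokens, entity_type):
    """Return {B, I, E, S} tags for one entity span covering `num_tokens` tokens."""
    if num_tokens == 1:
        return [f"S-{entity_type}"]
    return ([f"B-{entity_type}"]
            + [f"I-{entity_type}"] * (num_tokens - 2)
            + [f"E-{entity_type}"])

print(bies_tags(1, "total"))    # ['S-total']
print(bies_tags(3, "address"))  # ['B-address', 'I-address', 'E-address']
```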

3 Experiments

3.1 Pre-training Dataset

The performance of pre-trained models is largely determined by the scale and quality of datasets. Therefore, we need a large-scale scanned document image dataset to pre-train the LayoutLM model. Our model is pre-trained on the IIT-CDIP Test Collection 1.0, which contains more than 6 million documents, with more than 11 million scanned document images. Moreover, each document has its corresponding text and metadata stored in XML files. The text is the content produced by applying OCR to document images. The metadata describes the properties of the document such as the unique identity and document labels. Although the metadata contains erroneous and inconsistent tags, the scanned document images in this large-scale dataset are perfectly suitable for pre-training our model.

3.2 Fine-tuning Dataset

The FUNSD Dataset

We evaluate our approach on the FUNSD dataset for form understanding in noisy scanned documents. This dataset includes 199 real, fully annotated, scanned forms with 9,707 semantic entities and 31,485 words. These forms are organized as a list of semantic entities that are interlinked. Each semantic entity comprises a unique identifier, a label (i.e., question, answer, header or other), a bounding box, a list of links with other entities, and a list of words. The dataset is split into 149 training samples and 50 testing samples. We adopt the word-level F1 score as the evaluation metric.

The SROIE Dataset

We also evaluate our model on the SROIE dataset for receipt information extraction (Task 3). The dataset contains 626 receipts for training and 347 receipts for testing. Each receipt is organized as a list of text lines with bounding boxes. Each receipt is labeled with four types of entities: {company, date, address, total}. The evaluation metric is the exact-match F1 score of the entity recognition results.

The RVL-CDIP Dataset

The RVL-CDIP dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images. The images are sized so their largest dimension does not exceed 1000 pixels. The 16 classes include {letter, form, email, handwritten, advertisement, scientific report, scientific publication, specification, file folder, news article, budget, invoice, presentation, questionnaire, resume, memo}. The evaluation metric is the overall classification accuracy.

3.3 Document Pre-processing

To utilize the layout information of each document, we need to obtain the location of each token. However, the pre-training dataset (IIT-CDIP Test Collection) only contains pure text and is missing the corresponding bounding boxes. In this case, we re-process the scanned document images to obtain the necessary layout information. Like the original pre-processing of the IIT-CDIP Test Collection, we process the dataset by applying OCR to the document images; the difference is that we obtain both the recognized words and their corresponding locations in the document image. Thanks to Tesseract, an open-source OCR engine, we can easily obtain the recognized words as well as their 2-D positions. We store the OCR results in hOCR format, a standard specification that defines the OCR results of a single document image using a hierarchical representation.
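
For illustration, the pytesseract wrapper around Tesseract can return both word-level bounding boxes and an hOCR export. This is a sketch of that kind of pre-processing, not the authors' exact script; the file names are placeholders.

```python
import pytesseract
from PIL import Image

image = Image.open("scanned_page.png")  # placeholder file name

# recognized words with their bounding boxes
ocr = pytesseract.image_to_data(image, output_type=pytesseract.Output.DICT)
words = [
    (ocr["text"][i],
     (ocr["left"][i], ocr["top"][i],
      ocr["left"][i] + ocr["width"][i], ocr["top"][i] + ocr["height"][i]))
    for i in range(len(ocr["text"])) if ocr["text"][i].strip()
]

# the same results exported in hOCR format
hocr = pytesseract.image_to_pdf_or_hocr(image, extension="hocr")
with open("scanned_page.hocr", "wb") as f:
    f.write(hocr)
```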

3.4 Model Pre-training

We initialize the weights of the LayoutLM model with the pre-trained BERT base model. Specifically, our BASE model has the same architecture: a 12-layer Transformer with a hidden size of 768 and 12 attention heads, containing about 113M parameters. We therefore use the BERT base model to initialize all modules in our model except the 2-D position embedding layers. For the LARGE setting, our model has a 24-layer Transformer with a hidden size of 1,024 and 16 attention heads, which is initialized by the pre-trained BERT large model and contains about 343M parameters.

In addition, we add the 2-D position embedding layers with four embedding representations (x0, y0, x1, y1), where (x0, y0) corresponds to the upper-left corner of the bounding box and (x1, y1) to the lower-right corner. Considering that the document layout may vary with different page sizes, we scale the actual coordinates to “virtual” coordinates: each actual coordinate is scaled to a value from 0 to 1,000.
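
The scaling itself is straightforward; a minimal sketch, assuming pixel-space boxes and a known page width and height:

```python
def normalize_box(box, page_width, page_height, target=1000):
    """Scale an (x0, y0, x1, y1) pixel box into the 0-1000 'virtual' coordinate system."""
    x0, y0, x1, y1 = box
    return (int(target * x0 / page_width),
            int(target * y0 / page_height),
            int(target * x1 / page_width),
            int(target * y1 / page_height))

# e.g. a 2550 x 3300 px page (a 300 DPI letter-size scan):
# normalize_box((510, 660, 1020, 990), 2550, 3300) -> (200, 200, 400, 300)
```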

We train our model on 8 NVIDIA Tesla V100 32GB GPUs with a total batch size of 80. The Adam optimizer is used with an initial learning rate of 5e-5 and a linear-decay learning rate schedule with warm-up. The BASE model takes roughly 80 hours to finish one epoch on 11M documents, while the LARGE model takes nearly 170 hours per epoch.
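
A sketch of this optimization setup using the linear warm-up schedule from the transformers library; the warm-up length and total step count below are placeholders, as they are not reported here.

```python
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(768, 768)   # stand-in for the LayoutLM pre-training model
total_steps = 100_000               # placeholder: len(dataloader) * num_epochs in practice

optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=1_000,         # placeholder warm-up length
    num_training_steps=total_steps,
)

def training_step(loss):
    """One optimization step with linear decay after warm-up."""
    loss.backward()
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
```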

3.5 Task-specific Fine-tuning

We evaluate the LayoutLM model on three different document image understanding tasks: Form Understanding, Receipt Understanding, and Document Image Classification. We follow the typical fine-tuning strategy and update all parameters in an end-to-end way on task-specific datasets.

Form Understanding

This task requires extracting and structuring the textual content of forms. It aims to extract key-value pairs from the scanned form images. In more detail, this task includes two sub-tasks: semantic labeling and semantic linking. Semantic labeling is the task of aggregating words as semantic entities and assigning pre-defined labels to them. Semantic linking is the task of predicting the relations between semantic entities. In this work, we focus on the semantic labeling task while semantic linking is out of the scope. To fine-tune LayoutLM on this task, we treat the semantic labeling task as a sequence labeling problem. We pass the final representation into a linear layer followed by a softmax layer to predict the label of each token. The model is trained for 100 epochs with a batch size of 64 and a learning rate of 5e-5.
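
A sketch of the described token-level head (a linear layer followed by softmax over each token's final representation); the label count assumes a BIES tagging of the three FUNSD entity types plus O and is our assumption.

```python
import torch
import torch.nn as nn

class SemanticLabelingHead(nn.Module):
    """Linear + softmax over final token representations for semantic labeling."""
    def __init__(self, hidden_size=768, num_labels=13):  # {B,I,E,S} x {question, answer, header} + O
        super().__init__()
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, sequence_output):
        # sequence_output: (batch, seq_len, hidden_size) from the LayoutLM encoder
        logits = self.classifier(sequence_output)
        return torch.softmax(logits, dim=-1)  # per-token label distribution
```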

Receipt Understanding

This task requires filling several pre-defined semantic slots according to the scanned receipt images. For instance, given a set of receipts, we fill specific slots like “company”, “address”, “date”, and “total”. Different from the form understanding task that requires labeling all matched entities and key-value pairs, the number of semantic slots is fixed with pre-defined keys. Therefore, the model only needs to predict the corresponding values using the sequence labeling method.

Document Image Classification

Given a visually rich document, this task aims to predict the corresponding category for each document image. Distinct from the existing image-based approaches, our model includes not only image representations but also text and layout information using the multimodal architecture in LayoutLM. Therefore, our model can combine the text, layout and image information in a more effective way. To fine-tune our model on this task, we pass the output of the CLS token into a linear layer to predict the category of the document. We fine-tune the model for 10 epochs with a batch size of 64 and a learning rate of 2e-5.
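
Correspondingly, a sketch of the document-level head over the CLS output (16 classes for RVL-CDIP; the layer name is ours):

```python
import torch.nn as nn

class DocClassificationHead(nn.Module):
    """Predict one of the 16 RVL-CDIP categories from the CLS representation."""
    def __init__(self, hidden_size=768, num_classes=16):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, cls_output):
        # cls_output: (batch, hidden_size)
        return self.classifier(cls_output)  # class logits, trained with cross-entropy
```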

3.6 Results

Form Understanding

We evaluate the form understanding task on the FUNSD dataset. The experiment results are shown in Table 1. We compare the LayoutLM model with two SOTA pre-trained NLP models: BERT and RoBERTa (Liu et al., 2019). The BERT base model achieves 0.6026 in F1, while the large model achieves 0.6563. Compared to BERT, RoBERTa performs much better on this dataset, as it is trained on larger data for more epochs. Due to time limitations, we present four settings for LayoutLM: 500K document pages with 6 epochs, 1M with 6 epochs, 2M with 6 epochs, and 11M with 2 epochs. We observe that the LayoutLM model substantially outperforms the existing SOTA pre-training baselines. With the BASE architecture, the LayoutLM model with 11M training data achieves 0.7866 in F1, which is much higher than BERT and RoBERTa with a similar number of parameters. In addition, we also add the MDC loss in the pre-training step, and it brings further improvements on the FUNSD dataset. Since we have not included the image features in this setting yet, there is still room to further improve the performance.

Modality | Model | Precision | Recall | F1 | #Parameters
Text only | BERT BASE | 0.5469 | 0.671 | 0.6026 | 110M
Text only | RoBERTa BASE | 0.6349 | 0.6975 | 0.6648 | 125M
Text only | BERT LARGE | 0.6113 | 0.7085 | 0.6563 | 340M
Text only | RoBERTa LARGE | 0.678 | 0.7391 | 0.7072 | 355M
Text + Layout (MVLM) | LayoutLM BASE (500K, 6 epochs) | 0.665 | 0.7355 | 0.6985 | 113M
Text + Layout (MVLM) | LayoutLM BASE (1M, 6 epochs) | 0.6909 | 0.7735 | 0.7299 | 113M
Text + Layout (MVLM) | LayoutLM BASE (2M, 6 epochs) | 0.7377 | 0.782 | 0.7592 | 113M
Text + Layout (MVLM) | LayoutLM BASE (11M, 2 epochs) | 0.7597 | 0.8155 | 0.7866 | 113M
Text + Layout (MVLM+MDC) | LayoutLM BASE (1M, 6 epochs) | 0.7076 | 0.7695 | 0.7372 | 113M
Text + Layout (MVLM+MDC) | LayoutLM BASE (11M, 1 epoch) | 0.7194 | 0.7780 | 0.7475 | 113M
Text + Layout (MVLM) | LayoutLM LARGE (1M, 6 epochs) | 0.7171 | 0.805 | 0.7585 | 343M
Text + Layout (MVLM) | LayoutLM LARGE (11M, 1 epoch) | 0.7536 | 0.806 | 0.7789 | 343M
Text + Layout + Image (MVLM) | LayoutLM BASE (1M, 6 epochs) | – | – | – | –
Text + Layout + Image (MVLM) | LayoutLM BASE (11M, 2 epochs) | – | – | – | –
Table 1: Model accuracy (Precision, Recall, F1) on the FUNSD dataset
# Pre-training Data | # Pre-training Epochs | Precision | Recall | F1
500K | 1 epoch | 0.5779 | 0.6955 | 0.6313
500K | 2 epochs | 0.6217 | 0.705 | 0.6607
500K | 3 epochs | 0.6304 | 0.718 | 0.6713
500K | 4 epochs | 0.6383 | 0.7175 | 0.6756
500K | 5 epochs | 0.6568 | 0.734 | 0.6933
500K | 6 epochs | 0.665 | 0.7355 | 0.6985
1M | 1 epoch | 0.6156 | 0.7005 | 0.6552
1M | 2 epochs | 0.6545 | 0.737 | 0.6933
1M | 3 epochs | 0.6794 | 0.762 | 0.7184
1M | 4 epochs | 0.6812 | 0.766 | 0.7211
1M | 5 epochs | 0.6863 | 0.7625 | 0.7224
1M | 6 epochs | 0.6909 | 0.7735 | 0.7299
2M | 1 epoch | 0.6599 | 0.7355 | 0.6957
2M | 2 epochs | 0.6938 | 0.759 | 0.7249
2M | 3 epochs | 0.6915 | 0.7655 | 0.7266
2M | 4 epochs | 0.7081 | 0.781 | 0.7427
2M | 5 epochs | 0.7228 | 0.7875 | 0.7538
2M | 6 epochs | 0.7377 | 0.782 | 0.7592
11M | 1 epoch | 0.7464 | 0.7815 | 0.7636
11M | 2 epochs | 0.7597 | 0.8155 | 0.7866
Table 2: LayoutLM BASE (Text + Layout, MVLM) accuracy with different amounts of data and epochs on the FUNSD dataset

In addition, we also evaluate the LayoutLM model with different amounts of data and numbers of epochs on the FUNSD dataset, as shown in Table 2. For each data setting, the overall accuracy increases monotonically as more epochs are trained during the pre-training step. Furthermore, the accuracy also improves as more data is fed into the LayoutLM model. As the FUNSD dataset contains only 149 images for fine-tuning, these results confirm that the pre-training of text and layout is effective for scanned document understanding, especially in the low-resource setting.

Furthermore, we compare different initialization methods for the LayoutLM model with BERT and RoBERTa. The experiment results in Table 3 show that the LayoutLM BASE model initialized with RoBERTa outperforms BERT by 2.1 points in F1. We will pre-train more models with RoBERTa as the initialization in the future, especially for the LARGE settings.

Initialization | Model | Precision | Recall | F1
BERT BASE | LayoutLM BASE (1M, 6 epochs) | 0.6909 | 0.7735 | 0.7299
RoBERTa BASE | LayoutLM BASE (1M, 6 epochs) | 0.7173 | 0.7888 | 0.7514
BERT LARGE | LayoutLM LARGE (11M, 1 epoch) | 0.7536 | 0.806 | 0.7789
RoBERTa LARGE | LayoutLM LARGE (11M, 1 epoch) | – | – | –
Table 3: Different initialization methods for LayoutLM (Text + Layout, MVLM)

Receipt Understanding

We evaluate the receipt understanding task using the SROIE dataset. The results are shown in Table 4. As we only test the performance of the Key Information Extraction task in SROIE, we would like to eliminate the effect of incorrect OCR results. Therefore, we pre-process the training data using the ground-truth OCR and run a set of experiments with the baseline models (BERT and RoBERTa) as well as the LayoutLM model. The experiment results show that the LayoutLM LARGE model trained with 11M document images achieves an F1 score of 0.9524, which is significantly better than the first place on the competition leaderboard. This result also verifies that the pre-trained LayoutLM not only performs well on the in-domain dataset (FUNSD) but also outperforms several strong baselines on out-of-domain datasets such as SROIE.
