LayoutReader

<p align="center"> 🤗 <a href="https://huggingface.co/hantian/layoutreader">Hugging Face</a> </p> <p align="center"> <img src="./example/page_0.png" width="400" alt="page_0"/> <img src="./example/page_1.png" width="400" alt="page_1"/> </p>

Why this repo?

The original LayoutReader was published by Microsoft Research. It is based on LayoutLM and uses a seq2seq architecture to predict the reading order of the words in a document. There are several problems with the original repo:

  1. It doesn't use transformers, the code is full of leftover experiments, and it is not well organized. It's hard to train and deploy.
  2. seq2seq is too slow in production; I want to get all predictions in one pass.
  3. The pre-trained model's input is English word-level, which doesn't match real use cases. Real inputs are the spans extracted by a PDF parser or OCR.
  4. I want a multilingual model. I noticed that using only the bbox is only slightly worse than using bbox+text, so I trained a model that uses only the bbox and ignores the text.

What I did

  1. Refactored the code to use transformers' LayoutLMv3ForTokenClassification for training and evaluation.
  2. Offered a script to turn the original word-level dataset into a span-level dataset.
  3. Implemented a better post-processor to avoid duplicate predictions.
  4. Released a pre-trained model fine-tuned from layoutlmv3-large.

How to use?

```python
from transformers import LayoutLMv3ForTokenClassification

from v3.helpers import boxes2inputs, parse_logits, prepare_inputs

model = LayoutLMv3ForTokenClassification.from_pretrained("hantian/layoutreader")

# list of [left, top, right, bottom] span bboxes,
# coordinates normalized to the range 0-1000
boxes = [[...], ...]
inputs = boxes2inputs(boxes)
inputs = prepare_inputs(inputs, model)
logits = model(**inputs).logits.cpu().squeeze(0)
orders = parse_logits(logits, len(boxes))
print(orders)
# [0, 1, 2, ...]
```
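The predicted orders can then be used to sort the spans back into reading order. A minimal sketch with hypothetical span texts, assuming `orders[i]` is the predicted reading position of span `i`:

```python
# Hypothetical span texts, aligned one-to-one with the boxes above.
texts = ["world", "reading", "Hello"]
# Hypothetical model output: predicted reading position of each span.
orders = [2, 1, 0]
# Sort spans by predicted position to recover the reading order.
ordered = [text for _, text in sorted(zip(orders, texts))]
print(ordered)  # ['Hello', 'reading', 'world']
```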

Or you can run `python main.py` to serve the model.
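Since the model expects bbox coordinates in the 0-1000 range, pixel-space boxes from a PDF parser or OCR must be rescaled first. A minimal helper sketch (the function name is my own, not part of the repo):

```python
def normalize_boxes(boxes, page_width, page_height):
    # Scale pixel-space [left, top, right, bottom] boxes to the
    # 0-1000 coordinate range expected by the model.
    return [
        [
            round(left * 1000 / page_width),
            round(top * 1000 / page_height),
            round(right * 1000 / page_width),
            round(bottom * 1000 / page_height),
        ]
        for left, top, right, bottom in boxes
    ]
```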

Dataset

Download Original Dataset

The original dataset can be downloaded from ReadingBank. More details can be found in the original repo.

Build Span-Level Dataset

```bash
unzip ReadingBank.zip
python tools.py ./train/ train.jsonl.gz
python tools.py ./dev/ dev.jsonl.gz
python tools.py ./test/ test.jsonl.gz --src-shuffle-rate=0
python tools.py ./test/ test_shuf.jsonl.gz --src-shuffle-rate=1
```
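The resulting files are gzip-compressed JSON Lines. A quick way to inspect them (the record fields depend on tools.py, so this just streams raw records):

```python
import gzip
import json

def read_jsonl_gz(path):
    # Stream one JSON record per line from a .jsonl.gz file.
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)
```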

Train & Eval

The core code is in the ./v3 folder. train.sh and eval.py are the entry points.

```bash
bash train.sh
python eval.py ../test.jsonl.gz hantian/layoutreader
python eval.py ../test_shuf.jsonl.gz hantian/layoutreader
```

Span-Level Results

  1. shuf indicates whether the input order is shuffled.
  2. BLEU Idx is the BLEU score over the predicted orders of the tokens.
  3. BLEU Text is the BLEU score of the final merged text.

I only trained the layout-only model and tested on the span-level dataset, so the Heuristic Method result differs considerably from the original word-level result. I mainly focus on BLEU Text: it is only slightly lower than the original word-level result, but inference is much faster.
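For reference, BLEU over index sequences amounts to clipped n-gram precision with a brevity penalty. A simplified single-reference sketch, just to illustrate how a predicted order is scored against the ground truth (this is not the repo's actual eval code):

```python
import math
from collections import Counter

def ngrams(seq, n):
    return [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]

def bleu(candidate, reference, max_n=4):
    # Clipped n-gram precisions for n = 1..max_n.
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        overlap = sum(min(count, ref[g]) for g, count in cand.items())
        precisions.append(overlap / max(sum(cand.values()), 1))
    if min(precisions) == 0:
        return 0.0
    # Geometric mean of precisions times the brevity penalty.
    log_avg = sum(math.log(p) for p in precisions) / max_n
    if len(candidate) >= len(reference):
        bp = 1.0
    else:
        bp = math.exp(1 - len(reference) / max(len(candidate), 1))
    return bp * math.exp(log_avg)
```

A perfectly predicted order scores 1.0; any local swap breaks the higher-order n-grams and drops the score sharply.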

| Method | shuf | BLEU Idx | BLEU Text |
| --- | --- | --- | --- |
| Heuristic Method | no | 44.4 | 70.7 |
| LayoutReader (layout only) | no | 94.9 | 97.5 |
| LayoutReader (layout only) | yes | 94.8 | 97.4 |

Word-Level Results

My eval script

The layout-only model was trained by myself using the original code; the public model is Microsoft's pre-trained model. The layout-only model is nearly as good as the public model, and shuffling the input has only a small effect on the results.

I only tested on the first part of the test dataset, because full evaluation is too slow.

| Method | shuf | BLEU Idx | BLEU Text |
| --- | --- | --- | --- |
| Heuristic Method | no | 78.3 | 79.4 |
| LayoutReader (layout only) | no | 98.0 | 98.2 |
| LayoutReader (layout only) | yes | 97.8 | 98.0 |
| LayoutReader (public model) | no | 98.0 | 98.3 |

Old eval script (copied from the original paper)

| Method | Encoder | BLEU | ARD |
| --- | --- | --- | --- |
| Heuristic Method | - | 0.6972 | 8.46 |
| LayoutReader (layout only) | LayoutLM (layout only) | 0.9732 | 2.31 |
| LayoutReader | LayoutLM | 0.9819 | 1.75 |

| Method | BLEU r=100% | BLEU r=50% | BLEU r=0% | ARD r=100% | ARD r=50% | ARD r=0% |
| --- | --- | --- | --- | --- | --- | --- |
| LayoutReader (layout only) | 0.9701 | 0.9729 | 0.9732 | 2.85 | 2.61 | 2.31 |
| LayoutReader | 0.9765 | 0.9788 | 0.9819 | 2.50 | 2.24 | 1.75 |

| Method | BLEU r=100% | BLEU r=50% | BLEU r=0% | ARD r=100% | ARD r=50% | ARD r=0% |
| --- | --- | --- | --- | --- | --- | --- |
| LayoutReader (layout only) | 0.9718 | 0.9714 | 0.1331 | 2.72 | 2.82 | 105.40 |
| LayoutReader | 0.9772 | 0.9770 | 0.1783 | 2.48 | 2.46 | 72.94 |

Citation

If this model helps you, please cite it.

```bibtex
@software{Pang_Faster_LayoutReader_based_2024,
  author = {Pang, Hantian},
  month = feb,
  title = {{Faster LayoutReader based on LayoutLMv3}},
  url = {https://github.com/ppaanngggg/layoutreader},
  version = {1.0.0},
  year = {2024}
}
```