Multi-modal Text Recognition Networks: Interactive Enhancements between Visual and Semantic Features (ECCV 2022)

| paper | slide | poster |

Official PyTorch implementation for Multi-modal Text Recognition Networks: Interactive Enhancements between Visual and Semantic Features (MATRN) in ECCV 2022.

Byeonghu Na, Yoonsik Kim, and Sungrae Park

This paper introduces a novel method, called Multi-modAl Text Recognition Network (MATRN), that enables interactions between visual and semantic features for better recognition performance.

<img src="./figures/overview.png" width="1000" title="overview" alt="An overview of MATRN. A visual feature extractor and an LM extract visual and semantic features, respectively. By utilizing the attention map, representing relations between visual features and character positions, MATRNs encode spatial information into the semantic features and hide visual features related to a randomly selected character. Through the multi-modal feature enhancement module, visual and semantic features interact with each other and the enhanced features in two modalities are fused to finalize the output sequence.">

Datasets

We use LMDB datasets for training and evaluation. They can be downloaded from clova (validation and evaluation sets) and ABINet (training and evaluation sets).
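As a minimal sketch of how these archives are typically read, the snippet below assumes the key layout used by the clova/ABINet LMDB releases (`num-samples`, `image-%09d`, `label-%09d`, 1-indexed); the dataset path in the usage block is hypothetical.

```python
# Minimal LMDB reader sketch, assuming the key layout of the clova/ABINet
# text-recognition archives: b"num-samples", b"image-%09d", b"label-%09d".
import io


def image_key(index: int) -> bytes:
    """Key of the encoded image bytes for sample `index` (1-indexed)."""
    return b"image-%09d" % index


def label_key(index: int) -> bytes:
    """Key of the UTF-8 transcription for sample `index` (1-indexed)."""
    return b"label-%09d" % index


def read_sample(env, index):
    """Return (PIL.Image, str) for one sample from an open lmdb environment."""
    from PIL import Image  # pillow is listed in the requirements above
    with env.begin(write=False) as txn:
        img = Image.open(io.BytesIO(txn.get(image_key(index)))).convert("RGB")
        label = txn.get(label_key(index)).decode("utf-8")
    return img, label


if __name__ == "__main__":
    import lmdb  # also listed in the requirements above
    # Hypothetical path; point this at one of the downloaded archives.
    env = lmdb.open("data/evaluation/IIIT5k_3000",
                    readonly=True, lock=False, readahead=False, meminit=False)
    with env.begin(write=False) as txn:
        n = int(txn.get(b"num-samples"))
    print(f"{n} samples; first label: {read_sample(env, 1)[1]}")
```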

Requirements

pip install torch==1.7.1 torchvision==0.8.2 fastai==1.0.60 lmdb pillow opencv-python tensorboardX editdistance

Pretrained Models

| Model | IIIT | SVT | IC13<sub>S</sub> | IC13<sub>L</sub> | IC15<sub>S</sub> | IC15<sub>L</sub> | SVTP | CUTE |
|-------|------|-----|------------------|------------------|------------------|------------------|------|------|
| MATRN | 96.7 | 94.9 | 97.9 | 95.8 | 86.6 | 82.9 | 90.5 | 94.1 |

Training and Evaluation

# Training
python main.py --config=configs/train_matrn.yaml
# Evaluation
python main.py --config=configs/train_matrn.yaml --phase test --image_only

Additional flags:

Acknowledgements

This implementation is based on ABINet.

Citation

Please cite this work in your publications if it helps your research.

@inproceedings{na2022multi,
 title={Multi-modal text recognition networks: Interactive enhancements between visual and semantic features},
 author={Na, Byeonghu and Kim, Yoonsik and Park, Sungrae},
 booktitle={European Conference on Computer Vision},
 pages={446--463},
 year={2022},
 organization={Springer}
}