A Text Attention Network for Spatial Deformation Robust Scene Text Image Super-resolution (CVPR2022)

https://arxiv.org/abs/2203.09388

Jianqi Ma, Zhetong Liang, Lei Zhang
Department of Computing, The Hong Kong Polytechnic University, Hong Kong, China & OPPO Research

Recovering TextZoom samples

TATT visualization

Environment:

Python, PyTorch, CUDA, NumPy

Other required Python packages such as pyyaml, opencv-python (cv2), Pillow, and imgaug
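
As a quick sanity check of the environment, the snippet below (a minimal sketch, assuming the packages above map to `torch`, `numpy`, `cv2`, `yaml`, `PIL`, and `imgaug`) imports each dependency and reports whether CUDA is visible:

```python
# Minimal environment sanity check (illustrative; versions are not pinned here).
import torch        # pytorch
import numpy as np  # numpy
import cv2          # opencv-python
import yaml         # pyyaml
import PIL          # Pillow
import imgaug       # imgaug

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("NumPy:", np.__version__, "| OpenCV:", cv2.__version__)
print("Pillow:", PIL.__version__, "| imgaug:", imgaug.__version__)
```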

Main idea

The pipeline

<img src="./visualizations/TATT_pipeline_v2.jpg" width="720px"/>

TP Interpreter

<img src="./visualizations/TATT-TP_Interpreter.jpg" width="720px">

Configure your training

Download the pretrained recognizers from:

Aster: https://github.com/ayumiymk/aster.pytorch  
MORAN:  https://github.com/Canjie-Luo/MORAN_v2  
CRNN: https://github.com/meijieru/crnn.pytorch

Unzip the code, enter 'TATT_ROOT/', and place the pretrained recognizer weights in 'TATT_ROOT/'.
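
Before training, it can help to confirm that the recognizer checkpoints are in place and readable. The snippet below is only a sketch: the file names ('aster.pth.tar', 'moran.pth', 'crnn.pth') are assumptions, so substitute whatever names the downloaded checkpoints actually have under 'TATT_ROOT/':

```python
# Illustrative check that the pretrained recognizer weights are readable.
# The file names below are assumptions; adjust them to the checkpoints you downloaded.
import os
import torch

TATT_ROOT = "."  # run from TATT_ROOT/

for name in ["aster.pth.tar", "moran.pth", "crnn.pth"]:
    path = os.path.join(TATT_ROOT, name)
    if not os.path.isfile(path):
        print(f"missing: {path}")
        continue
    state = torch.load(path, map_location="cpu")
    # Some checkpoints wrap the weights in a dict (e.g. under 'state_dict').
    n_entries = len(state.get("state_dict", state)) if isinstance(state, dict) else "?"
    print(f"loaded {name}: {n_entries} entries")
```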

Download the TextZoom dataset:

https://github.com/JasonBoy1/TextZoom
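
TextZoom is distributed as LMDB databases (train1/train2 and the easy/medium/hard test splits). The sketch below shows one way to peek at a sample; the key naming ('image_hr-%09d', 'image_lr-%09d', 'label-%09d', 'num-samples') follows the convention used by the TextZoom/TSRN code, so treat it as an assumption and adjust if your copy differs:

```python
# Illustrative reader for one TextZoom LMDB split (key names assumed, see note above).
import io
import lmdb
from PIL import Image

def peek_textzoom(lmdb_dir, index=1):
    env = lmdb.open(lmdb_dir, readonly=True, lock=False, readahead=False, meminit=False)
    with env.begin(write=False) as txn:
        n = int(txn.get(b"num-samples"))
        hr = Image.open(io.BytesIO(txn.get(b"image_hr-%09d" % index)))
        lr = Image.open(io.BytesIO(txn.get(b"image_lr-%09d" % index)))
        label = txn.get(b"label-%09d" % index).decode()
    print(f"{lmdb_dir}: {n} samples | HR {hr.size}, LR {lr.size}, label '{label}'")
    return hr, lr, label

# Example call (the dataset path is hypothetical):
# peek_textzoom("./dataset/TextZoom/test/easy")
```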

Train the corresponding model (e.g. TATT):

    chmod a+x train_TATT.sh
    ./train_TATT.sh

Run the test-prefixed shell script to test the corresponding model.

To run in test mode, add '--go_test' to the command in the shell file.
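
For reference, '--go_test' is a boolean switch that routes the entry script into evaluation instead of training. The sketch below shows how such a flag is commonly wired with argparse; it is illustrative only and not a copy of the repo's actual main script:

```python
# Illustrative sketch of a '--go_test' style switch (not the repo's code).
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--go_test", action="store_true",
                    help="run evaluation on the TextZoom test sets instead of training")
args = parser.parse_args()

if args.go_test:
    print("running evaluation...")  # the test routine would be called here
else:
    print("running training...")    # the training loop would be called here
```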

Cite this paper:

    @inproceedings{ma2022text,
      title={A Text Attention Network for Spatial Deformation Robust Scene Text Image Super-resolution},
      author={Ma, Jianqi and Liang, Zhetong and Zhang, Lei},
      booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
      pages={5911--5920},
      year={2022}
    }