# CTRNet
This repository is the implementation of "Don't Forget Me: Accurate Background Recovery for Text Removal via Modeling Local-Global Context" (ECCV 2022).
The inference code is available.
We updated the retrained model weights on Jul 27; you can download them here.
For any questions, please email me. Thank you for your interest.
## Environment
My environment is as follows:
- Python 3.6.9
- PyTorch 1.3 (1.3+ also works)
- Inplace_Abn (only for training; the pretrained model can be downloaded from ASL, which is trained on OpenImages)
- torchlight
- Polygon
- shapely
- skimage
Install torchlight:
```bash
cd ./torchlight
python setup.py install
```
## Datasets
We use SCUT-EnsText and SCUT-Syn.
After downloading, run `flist.py` to generate data lists:
```bash
mkdir datasets
python flist.py --path path_to_enstext_test_set --output ./datasets/enstext_test.flist
```
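The shipped `flist.py` is the authoritative version; as a rough sketch, a script like this typically just collects image paths and writes them one per line. The argument names below mirror the command above; everything else is an assumption, not the repository's actual code:

```python
# Minimal flist-generator sketch (an assumption, not the shipped flist.py).
import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument("--path", type=str, help="root directory of the test set")
parser.add_argument("--output", type=str, help="where to write the .flist file")
args = parser.parse_args()

# Collect all image paths under --path, sorted for a deterministic order.
exts = (".png", ".jpg", ".jpeg")
paths = sorted(
    os.path.join(root, name)
    for root, _, files in os.walk(args.path)
    for name in files
    if name.lower().endswith(exts)
)

# Write one absolute/relative path per line.
with open(args.output, "w") as f:
    f.write("\n".join(paths))
```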
All images are resized to 512 × 512. The structure images for the LCG block are generated with the official code of the RTV method. You can generate the data yourself; we will also provide the test data here.
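If you prepare the data yourself, the resizing step can be done with PIL; a minimal sketch, where the directory names are placeholders and the interpolation choice is an assumption:

```python
# Minimal resizing sketch (an assumption about preprocessing, not the official pipeline).
import os
from PIL import Image

src_dir = "path_to_enstext_test_set"  # placeholder
dst_dir = "path_to_resized_images"    # placeholder
os.makedirs(dst_dir, exist_ok=True)

for name in os.listdir(src_dir):
    if name.lower().endswith((".png", ".jpg", ".jpeg")):
        img = Image.open(os.path.join(src_dir, name)).convert("RGB")
        img = img.resize((512, 512), Image.BICUBIC)  # resize to the expected 512 x 512
        img.save(os.path.join(dst_dir, name))
```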
<!-- ## Training
Create a new directory and place the pretrained weights for [TResNet_L](https://github.com/Alibaba-MIIL/ASL/blob/main/MODEL_ZOO.md) on OpenImages and our [Structure generator](https://github.com/Alibaba-MIIL/ASL/blob/main/MODEL_ZOO.md). You can also train the structure generator from scratch, but you would need to modify some code in this project.
```bash
CUDA_VISIBLE_DEVICES=0,1 python train.py \
    --bs 2 --gpus 2 --prefix CTRNet \
    --img_flist your/train/flist/of/paris \
    --TRresNet_path path/of/ASL/weight \
    --nEpochs 150
``` -->

## Testing
To generate the text removal results, run the following command:
```bash
CUDA_VISIBLE_DEVICES=0 python test.py \
    --bs 1 --gpus 1 --prefix CTRNet \
    --img_flist your/test/flist/ \
    --model your/model/weights --save_path ./results --save
```
The PSNR is calculated with `skimage.metrics.peak_signal_noise_ratio`.
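For reference, a minimal sketch of computing this metric on a result/ground-truth pair; the file names here are hypothetical:

```python
# PSNR sketch using scikit-image (file paths are hypothetical examples).
from skimage import io
from skimage.metrics import peak_signal_noise_ratio

pred = io.imread("./results/img_001.png")                          # model output
gt = io.imread("path_to_enstext_test_set/all_gts/img_001.png")     # ground truth

# data_range=255 for uint8 images, the usual convention when reporting PSNR.
psnr = peak_signal_noise_ratio(gt, pred, data_range=255)
print(f"PSNR: {psnr:.2f} dB")
```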
## Acknowledgement
This repository benefits a lot from SPL and DETR. Thanks for their excellent work.
## Citation
If you find our method or dataset useful for your research, please cite:
```bibtex
@inproceedings{CTRNet,
  author    = {Liu, Chongyu and Jin, Lianwen and Liu, Yuliang and Luo, Canjie and Chen, Bangdong and Guo, Fengjun and Ding, Kai},
  booktitle = {ECCV},
  title     = {Don't Forget Me: Accurate Background Recovery for Text Removal via Modeling Local-Global Context},
  year      = {2022},
}
```
## Feedback
Suggestions and opinions on our work (both positive and negative) are welcome. Please contact the authors by email: Chongyu Liu (liuchongyu1996@gmail.com). For commercial use, please contact Prof. Lianwen Jin (eelwjin@scut.edu.cn).