RADN
[CVPRW 2021] Code for Region-Adaptive Deformable Network for Image Quality Assessment
Overview
<p align="center"> <img src="Figures/architecture.png" width="100%"> </p>Update
Update
- [2021/5/7] Add code for WResNet (our baseline).
- [2021/5/29] Add code for RADN.
Instruction
- Run `mkdir.sh` to create the necessary directories.
- Use `sh train.sh` or `sh test.sh` to train or test the model. You can also change the options in the shell files as you like.
The pretrained models can be found at this URL.
Please note that the performance on the challenge leaderboard was obtained by ensembling; the checkpoint above is for a single model.
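For reference, a minimal sketch of such checkpoint ensembling, assuming each checkpoint produces one quality score per input; the function name, input layout, and paths below are placeholders, not this repo's actual API:

```python
# Placeholder names; the repo's actual data pipeline and model signature may differ.
import torch

@torch.no_grad()
def ensembled_score(model: torch.nn.Module, inputs, checkpoint_paths, device="cuda"):
    """Average the predicted quality score over several trained checkpoints."""
    model = model.to(device).eval()
    inputs = [t.to(device) for t in inputs]  # e.g. (distorted, reference) patches
    scores = []
    for path in checkpoint_paths:
        model.load_state_dict(torch.load(path, map_location=device))
        scores.append(model(*inputs).mean().item())
    return sum(scores) / len(scores)  # simple mean over checkpoints
```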
Note: Deformable convolution and self-attention can be unstable during training. If you run into problems when training RADN, don't worry: you can load the baseline weights to initialize RADN, which gives stable training and rapid convergence.
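A minimal sketch of that warm start, assuming the baseline and RADN share parameter names for their common layers; the file path and helper name are illustrative, not the exact ones in this repo:

```python
# Illustrative only: initialize RADN from baseline (WResNet) weights before training.
import torch

def init_from_baseline(radn: torch.nn.Module, baseline_ckpt: str):
    """Copy every parameter whose name and shape match; leave the rest untouched."""
    state = torch.load(baseline_ckpt, map_location="cpu")
    # strict=False keeps RADN-specific layers (deformable offsets, attention)
    # at their fresh initialization while loading the shared backbone weights.
    missing, unexpected = radn.load_state_dict(state, strict=False)
    print("kept at random init:", missing)
    print("unused baseline keys:", unexpected)
    return radn
```

Call it once on the constructed RADN model (e.g. `init_from_baseline(model, "checkpoints/wresnet_baseline.pth")`) before starting the training loop.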
Performance
Scatter Plots
<p align="center"> <img src="Figures/scatter_plots.png" width="85%"> </p>Attention Maps
<p align="center"> <img src="Figures/attention_maps.png" width="75%"> </p>TODO (If I have free time)
- Release the checkpoint of RADN
- Simplify the code
- etc.
Acknowledgment
The code borrows heavily from WaDIQaM implemented by Dingquan Li, and we really appreciate it.
Citation
If you find our work or code helpful for your research, please consider citing:
@inproceedings{RADN2021ntire,
title={Region-Adaptive Deformable Network for Image Quality Assessment},
author={Shuwei Shi and Qingyan Bai and Mingdeng Cao and Weihao Xia and Jiahao Wang and Yifan Chen and Yujiu Yang},
booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
year={2021}
}