
VCRNet

Guided by the free-energy principle, generative adversarial network (GAN)-based no-reference image quality assessment (NR-IQA) methods have improved image quality prediction accuracy. However, GANs cannot handle the restoration task well for free-energy-principle-guided NR-IQA methods, especially for severely degraded images, so the quality degradation relationship between a distorted image and its restored image cannot be accurately built. To address this problem, a visual compensation restoration network (VCRNet)-based NR-IQA method is proposed, which uses a non-adversarial model to efficiently handle the distorted-image restoration task. The proposed VCRNet consists of a visual restoration network and a quality estimation network.

![Framework](./image-20211022140814450)
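To illustrate the two-network structure described above, here is a minimal PyTorch sketch: a non-adversarial encoder-decoder restores the distorted image, and a quality network fuses its own features with the restoration features to regress a score. All layer sizes and module names here are illustrative assumptions, not the paper's actual architecture (which uses an EfficientNet backbone and multi-level feature fusion).

```python
import torch
import torch.nn as nn

class RestorationNet(nn.Module):
    """Hypothetical non-adversarial encoder-decoder that restores the image."""
    def __init__(self, ch=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(ch, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        feat = self.encoder(x)          # restoration features, reused below
        return self.decoder(feat), feat

class QualityNet(nn.Module):
    """Hypothetical regressor fusing distorted-image and restoration features."""
    def __init__(self, ch=16):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, ch * 2, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch * 2, ch * 2, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(ch * 4, 1),
        )

    def forward(self, distorted, restoration_feat):
        f = self.backbone(distorted)
        fused = torch.cat([f, restoration_feat], dim=1)  # channel-wise fusion
        return self.head(fused)

restore = RestorationNet()
quality = QualityNet()
x = torch.rand(2, 3, 64, 64)            # batch of distorted images
restored, feat = restore(x)
score = quality(x, feat)                # restored: (2, 3, 64, 64); score: (2, 1)
```

The key design point this sketch mirrors is that the restoration branch is trained with a plain reconstruction objective rather than an adversarial discriminator, and its intermediate features compensate the quality branch.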

Dataset

| Dataset | Link |
| --- | --- |
| LIVE | https://live.ece.utexas.edu/research/quality/index.htm |
| TID2013 | http://r0k.us/graphics/kodak/ |
| KONIQ-10K | http://database.mmsp-kn.de/koniq-10k-database.html |
| CSIQ | https://pan.baidu.com/s/1XCSafnf3SlbgePJuMq5M5w (pass: w7dh) |
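On synthetic-distortion datasets such as LIVE, TID2013, and CSIQ, NR-IQA experiments conventionally split train/test by reference image so that no content appears in both sets. The snippet below is a generic sketch of that protocol; the `samples` structure and function name are hypothetical, not this repository's actual data loader.

```python
import random

def split_by_reference(samples, train_ratio=0.8, seed=0):
    """Split distorted images by reference image so train and test share no
    content (standard IQA protocol). `samples` maps a reference-image id to
    its list of distorted-image paths (hypothetical structure)."""
    refs = sorted(samples)
    rng = random.Random(seed)          # fixed seed for a reproducible split
    rng.shuffle(refs)
    cut = int(len(refs) * train_ratio)
    train = [p for r in refs[:cut] for p in samples[r]]
    test = [p for r in refs[cut:] for p in samples[r]]
    return train, test

# Toy example: 10 references, 5 distortion levels each
samples = {f"ref{i}": [f"ref{i}_d{j}.bmp" for j in range(5)] for i in range(10)}
train, test = split_by_reference(samples)
print(len(train), len(test))  # 40 10
```

Authentic distortions (KONIQ-10K) have no reference images, so there a plain random split over images is used instead.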

Training and Testing

```shell
CUDA_VISIBLE_DEVICES=0 python main.py --mode train --dataset live
```
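Trained NR-IQA models are conventionally evaluated by the Spearman (SROCC) and Pearson (PLCC) correlations between predicted scores and subjective MOS values; the paper reports these metrics. A small NumPy sketch (helper name and sample values are illustrative):

```python
import numpy as np

def iqa_metrics(predicted, mos):
    """SROCC and PLCC between predicted quality scores and subjective MOS."""
    p = np.asarray(predicted, dtype=float)
    m = np.asarray(mos, dtype=float)
    plcc = np.corrcoef(p, m)[0, 1]
    # Spearman correlation = Pearson correlation of the ranks (tie-free case)
    srocc = np.corrcoef(p.argsort().argsort(), m.argsort().argsort())[0, 1]
    return srocc, plcc

pred = [0.9, 0.4, 0.7, 0.2, 0.6]       # hypothetical model outputs
mos = [85.0, 40.0, 70.0, 25.0, 55.0]   # hypothetical subjective scores
srocc, plcc = iqa_metrics(pred, mos)
print(srocc)  # 1.0 — the two orderings agree perfectly
```

SROCC measures monotonic agreement (ranking), while PLCC measures linear agreement; both range over [-1, 1], with higher being better.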

Note

```shell
pip install efficientnet_pytorch
```

Requirements

Citation

If you find our paper or code useful for your research, please cite:

```
@ARTICLE{9694502,
  author={Pan, Zhaoqing and Yuan, Feng and Lei, Jianjun and Fang, Yuming and Shao, Xiao and Kwong, Sam},
  journal={IEEE Transactions on Image Processing},
  title={VCRNet: Visual Compensation Restoration Network for No-Reference Image Quality Assessment},
  year={2022},
  volume={31},
  pages={1613-1627},
  doi={10.1109/TIP.2022.3144892}}
```