# FEQE
Official implementation of *Fast and Efficient Image Quality Enhancement via Desubpixel Convolutional Neural Networks*, ECCV Workshops 2018.
## Citation

Please cite our project if it is helpful for your research:
```
@InProceedings{Vu_2018_ECCV_Workshops,
    author = {Vu, Thang and Van Nguyen, Cao and Pham, Trung X. and Luu, Tung M. and Yoo, Chang D.},
    title = {Fast and Efficient Image Quality Enhancement via Desubpixel Convolutional Neural Networks},
    booktitle = {The European Conference on Computer Vision (ECCV) Workshops},
    month = {September},
    year = {2018}
}
```
<p align="center">
<img src="https://github.com/thangvubk/FEQE/blob/master/docs/P_results.PNG">
</p>
<p align="center">
Comparison of proposed FEQE with other state-of-the-art super-resolution and enhancement methods
</p>
<p align="center">
<img src="https://github.com/thangvubk/FEQE/blob/master/docs/net.PNG">
</p>
<p align="center">
Network architecture
</p>
<p align="center">
<img src="https://github.com/thangvubk/FEQE/blob/master/docs/sub-des.PNG">
</p>
<p align="center">
Proposed desubpixel
</p>
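For reference, the desubpixel operation in the figure is a lossless space-to-channel rearrangement; in TensorFlow 1.x it corresponds to `tf.space_to_depth`, with `tf.depth_to_space` as its subpixel (pixel-shuffle) inverse. A minimal sketch (illustrative; not necessarily the repo's exact implementation):

```python
import tensorflow as tf

def desubpixel(x, scale=2):
    # Rearranges each (scale x scale) spatial block into channels:
    # (N, H, W, C) -> (N, H/scale, W/scale, C*scale**2), losslessly.
    return tf.space_to_depth(x, block_size=scale)

def subpixel(x, scale=2):
    # Inverse rearrangement (pixel shuffle): channels back to space.
    return tf.depth_to_space(x, block_size=scale)

# Round trip: subpixel(desubpixel(x)) recovers x exactly.
x = tf.random_normal([1, 8, 8, 3])
y = subpixel(desubpixel(x, scale=2), scale=2)  # same shape/values as x
```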
## PIRM 2018 challenge results (super-resolution on mobile devices task)
<p align="center"> <img src="https://github.com/thangvubk/FEQE/blob/master/docs/PIRM.PNG"> </p> <p align="center"> TEAM_ALEX placed the first in overall benchmark score. Refer to <a href="http://ai-benchmark.com/challenge.html">PIRM 2018</a> for details. </p>Dependencies
- 1 NVIDIA GPU (about 4 hours of training on a Titan Xp)
- Python 3
- tensorflow 1.10+
- tensorlayer 1.9+
- tensorboardX 1.4+
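A quick way to sanity-check the dependencies listed above (assuming a GPU build of TensorFlow 1.x; the version attributes are standard for these packages):

```python
# Sanity-check the dependencies listed above.
import tensorflow as tf
import tensorlayer as tl
import tensorboardX

print("TensorFlow:", tf.__version__)              # expect 1.10+
print("TensorLayer:", tl.__version__)             # expect 1.9+
print("tensorboardX:", tensorboardX.__version__)  # expect 1.4+
print("GPU available:", tf.test.is_gpu_available())
```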
## Download datasets, models, and results
### Dataset
- Train: DIV2K (800 2K-resolution images)
- Valid: DIV2K (9 validation images)
- Test: Set5, Set14, B100, Urban100
- Download the train+val+test datasets
- Download the test-only dataset
### Pretrained models
- Download the pretrained models, including one PSNR-optimized model and one perception-optimized model
- Download the pretrained VGG used for the VGG loss
### Paper results
- Download the paper results (images) for the test datasets
## Project layout (recommended)
```
FEQE/
├── checkpoint
│   ├── FEQE
│   └── FEQE-P
├── data
│   ├── DIV2K_train_HR
│   ├── DIV2K_valid_HR_9
│   └── test_benchmark
├── docs
├── model
├── results
└── vgg_pretrained
    └── imagenet-vgg-verydeep-19.mat
```
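A small helper to create this layout, taking the tree above literally (directory names only; downloaded data, checkpoints, and the VGG weights still need to be placed manually):

```python
import os

# Directory names mirror the recommended tree above.
DIRS = [
    "checkpoint/FEQE",
    "checkpoint/FEQE-P",
    "data/DIV2K_train_HR",
    "data/DIV2K_valid_HR_9",
    "data/test_benchmark",
    "docs",
    "model",
    "results",
    "vgg_pretrained",
]
for d in DIRS:
    os.makedirs(os.path.join("FEQE", d), exist_ok=True)
```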
## Quick start
- Download the test-only dataset and put it into the `data/` directory.
- Download the pretrained models and put them into the `checkpoint/` directory.
- Run `python test.py --dataset <DATASET_NAME>` (see the batch-runner sketch below to loop over all benchmarks).
- Results will be saved into the `results/` directory.
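To evaluate all four benchmark datasets in one go, a hypothetical batch runner (dataset names follow the Test list above; confirm the exact `--dataset` spellings expected by `test.py`):

```python
import subprocess

# Invoke test.py once per benchmark dataset, as in the Quick start step.
for dataset in ["Set5", "Set14", "B100", "Urban100"]:
    subprocess.run(["python", "test.py", "--dataset", dataset], check=True)
```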
## Training
- Download the train+val+test datasets and put them into the `data/` directory.
- Download the pretrained VGG and put it into the `vgg_pretrained/` directory.
- Pretrain with MSE loss on scale 2: `python train.py --checkpoint checkpoint/mse_s2 --alpha_vgg 0 --scale 2 --phase pretrain`
- Finetune with MSE loss on scale 4 (FEQE-P): `python train.py --checkpoint checkpoint/mse_s4 --alpha_vgg 0 --pretrained_model checkpoint/mse_s2/model.ckpt`
- Finetune with the full loss on scale 4 (see the loss sketch after this list): `python train.py --checkpoint checkpoint/full_s4 --pretrained_model checkpoint/mse_s4/model.ckpt`
- All models will be saved into the `checkpoint/` directory.
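For intuition, the objective implied by the flags above combines a pixel-wise MSE term with a VGG feature (perceptual) term weighted by `alpha_vgg`; `--alpha_vgg 0` reduces it to pure MSE. A sketch, where `vgg_features` is a hypothetical callable running the pretrained VGG-19 (`vgg_pretrained/imagenet-vgg-verydeep-19.mat`) up to a chosen layer, not the repo's exact API:

```python
import tensorflow as tf

def total_loss(sr, hr, vgg_features, alpha_vgg=1.0):
    """Two-term objective: pixel MSE + alpha_vgg * VGG feature MSE."""
    mse = tf.reduce_mean(tf.square(sr - hr))
    if alpha_vgg == 0:  # --alpha_vgg 0: pure MSE, as in the pretraining steps
        return mse
    # `vgg_features` is a hypothetical helper, labeled as an assumption above.
    vgg = tf.reduce_mean(tf.square(vgg_features(sr) - vgg_features(hr)))
    return mse + alpha_vgg * vgg
```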
## Visualization
- Start TensorBoard: `tensorboard --logdir checkpoint`
- Enter `YOUR_IP:6006` in your web browser.
- Result ranges should be similar to those reported in the paper.
## Comprehensive testing
- Test the FEQE model (defaults): follow the Quick start.
- Test the FEQE-P model: `python test.py --dataset <DATASET> --model_path <FEQE-P path>`
- Test perceptual quality: refer to the PIRM validation code (a generic PSNR helper for the distortion metrics is sketched below).
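For the distortion side of the evaluation, a standard PSNR helper (generic formula; the repo's own evaluation may crop borders or convert to the Y channel first):

```python
import numpy as np

def psnr(img1, img2, max_val=255.0):
    """PSNR between two same-sized uint8 images."""
    diff = img1.astype(np.float64) - img2.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```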