# VCISR: Blind Single Image Super-Resolution with Video Compression Synthetic Data (WACV 2024)
<br> :star:If you like VCISR, please help star this repo. Thanks!:hugs:
👀 My new paper on Anime, based on VCISR: https://github.com/Kiteretsu77/APISR
## :book: Table Of Contents
- [Update](#update)
- [Installation](#installation)
- [Train](#train)
- [Inference](#inference)
- [Anime Extension](#Anime_Extension)
- [VC-RealLQ](#VC-RealLQ)
<a name="update"></a>Update
- 2024.03.02: Published the v1.0 release.
- 2023.12.08: The pre-trained weight is released.
- 2023.11.29: This repo is released.
<a name="installation"></a> Installation (Environment Preparation)
git clone git@github.com:Kiteretsu77/VCISR-official.git
cd VCISR-official
# Create conda env
conda create -n VCISR python=3.10
conda activate VCISR
# Install PyTorch (we use torch.compile in our repository by default)
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt
# Install FFmpeg (the command below is for Linux; for other systems, see https://ffmpeg.org/download.html)
sudo apt install ffmpeg
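As an optional sanity check (not part of this repository), the short Python snippet below confirms that PyTorch sees the GPU, that `torch.compile` is available (PyTorch >= 2.0), and that FFmpeg is on the PATH:

```python
# Optional environment check (illustrative only, not part of this repository).
import subprocess
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("torch.compile available:", hasattr(torch, "compile"))  # requires PyTorch >= 2.0
# Prints the first line of `ffmpeg -version`; raises FileNotFoundError if FFmpeg is missing.
print(subprocess.run(["ffmpeg", "-version"], capture_output=True, text=True).stdout.splitlines()[0])
```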
<a name="train"></a> Train
- Download the dataset (DIV2K) and crop it with the script below (following our paper):
bash scripts/download_datasets.sh
- Train: check `opt.py` to set up the parameters you want.
  - Step 1 (network L1-loss training): run `python train_code/train.py`. The model weights will be saved in the `saved_models` folder.
  - Step 2 (GAN adversarial training):
    - Change `opt['architecture']` in `opt.py` to `"GRLGAN"`.
    - Rename the chosen weight in `saved_models` (either the closest or the best checkpoint; we use the closest) to `grlgan_pretrained.pth` (see the sketch after this list).
    - Run `python train_code/train.py --use_pretrained`.
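For the rename in Step 2, here is a minimal sketch; the source filename `net_latest.pth` is a hypothetical placeholder, so substitute whichever Step 1 checkpoint in `saved_models` you actually choose:

```python
# Minimal sketch of the Step 2 rename; the source filename below is hypothetical.
import shutil

# Copy (rather than move) the chosen Step 1 checkpoint to the name Step 2 expects.
shutil.copyfile("saved_models/net_latest.pth", "saved_models/grlgan_pretrained.pth")
```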
<a name="inference"></a> Inference:
- Download the pre-trained weights from https://drive.google.com/file/d/1Mbrw1ji_qcOteuSOkZqVgSEda_PQ40tA/view?usp=drive_link or https://github.com/Kiteretsu77/VCISR-official/releases/tag/v1.0 and put them in the `saved_models` folder.
- Set up the configuration in `test_code/inference.py` after line 215.
- Then execute `python test_code/inference.py` (an optional verification sketch follows this list).
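Once inference finishes, a quick check like the sketch below confirms that the output resolution grew by the model's upscale factor; both paths are hypothetical, so point them at the input and output folders you configured in `test_code/inference.py`:

```python
# Illustrative check of a super-resolution result; both paths are hypothetical.
from PIL import Image

lr = Image.open("inputs/example.png")    # low-quality input image
sr = Image.open("results/example.png")   # super-resolved output image
print("Input  size:", lr.size)
print("Output size:", sr.size)           # should be larger by the model's upscale factor
```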
<a name="Anime_Extension"></a> Anime Extension:
We also extend our method to the anime restoration and super-resolution task using public and private anime datasets.
You can also find pre-built anime SR inference repositories here:
- https://github.com/Kiteretsu77/Anime_SR_Restoration (a regular inference tool)
- https://github.com/Kiteretsu77/FAST_Anime_VSR (a highly accelerated processing repository)

Both repositories use RRDB-based networks for training (instead of GRL).
<a name="VC-RealLQ"></a> VC-RealLQ:
This small inference image dataset will be released soon. If you need it earlier, please contact hikaridawn412316@gmail.com.
## Citation
Please cite us if our work is useful for your research.
@article{wang2023vcisr,
title={VCISR: Blind Single Image Super-Resolution with Video Compression Synthetic Data},
author={Wang, Boyang and Liu, Bowen and Liu, Shiyu and Yang, Fengyu},
journal={arXiv preprint arXiv:2311.00996},
year={2023}
}
## Disclaimer
This project is released for academic use only. The VC-RealLQ inference dataset is for personal use only and may not be redistributed without authorization. We disclaim responsibility for any distribution of the dataset. Users are solely liable for their actions. The project contributors are not legally affiliated with, nor accountable for, users' behaviors.
## License
This project is released under the GPL 3.0 license.
## Contact
If you have any questions, please feel free to contact me at hikaridawn412316@gmail.com or boyangwa@umich.edu.