# SelfDZSR (ECCV 2022)

Official PyTorch implementation of SelfDZSR.
Self-Supervised Learning for Real-World Super-Resolution from Dual Zoomed Observations <br> ECCV, 2022 <br> Zhilu Zhang, Ruohao Wang, Hongzhi Zhang, Yunjin Chen, Wangmeng Zuo <br>Harbin Institute of Technology, China
The extended version of SelfDZSR has been accepted by IEEE TPAMI in 2024.
Self-Supervised Learning for Real-World Super-Resolution from Dual and Multiple Zoomed Observations <br> IEEE TPAMI, 2024 <br> Zhilu Zhang, Ruohao Wang, Hongzhi Zhang, Wangmeng Zuo <br>Harbin Institute of Technology, China <br>GitHub: https://github.com/cszhilu1998/SelfDZSR_PlusPlus
## 1. Framework
<p align="center"><img src="introduction.png" width="95%"></p> <p align="center">Overall pipeline of the proposed SelfDZSR in the training and testing phases.</p>

- During training, the center parts of the short-focus and telephoto images are cropped as the input LR and Ref, respectively, and the whole telephoto image is taken as the GT. The auxiliary-LR is generated to guide the alignment of LR and Ref towards the GT.

- During testing, SelfDZSR can be directly deployed to super-resolve the whole short-focus image with the reference of the telephoto image.
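The construction of the training triplet described above can be sketched as follows. This is a minimal NumPy illustration, not code from this repository; the 4x zoom ratio, the image sizes, and the assumption that the telephoto frame exactly covers the center of the wide-angle view are all illustrative simplifications.

```python
import numpy as np

def make_training_triplet(short_focus, telephoto, scale=4):
    """Build (LR, Ref, GT) from a dual-zoom pair (illustrative sketch).

    short_focus: wide-angle frame, shape (H, W, 3).
    telephoto:   telephoto frame, shape (H, W, 3), assumed to cover the
                 center of the wide-angle view at `scale`x magnification.
    """
    h, w, _ = short_focus.shape
    # LR: the center crop of the short-focus image, i.e. the region
    # that the telephoto lens also sees (1/scale of each side here).
    ch, cw = h // scale, w // scale
    top, left = (h - ch) // 2, (w - cw) // 2
    lr = short_focus[top:top + ch, left:left + cw]

    # GT: the whole telephoto image (same scene content as LR, sharper).
    gt = telephoto

    # Ref: the center crop of the telephoto image.
    th, tw, _ = telephoto.shape
    rh, rw = th // scale, tw // scale
    rtop, rleft = (th - rh) // 2, (tw - rw) // 2
    ref = telephoto[rtop:rtop + rh, rleft:rleft + rw]
    return lr, ref, gt
```

With 256x256 inputs and `scale=4`, this yields a 64x64 LR, a 64x64 Ref, and the full 256x256 telephoto frame as GT.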
## 2. Preparation and Datasets
### Prerequisites

- Python 3.x and PyTorch 1.6.
- OpenCV, NumPy, Pillow, tqdm, lpips, scikit-image and tensorboardX.
### Dataset

- Nikon camera images and the CameraFusion dataset can be downloaded from this link.
### Data pre-processing

- If you want to pre-process additional short-focus and telephoto images, we provide a demo in `./data_preprocess`. (2022/9/13)
## 3. Quick Start
### 3.1 Pre-trained models
- To simplify the training process, we provide the pre-trained models of the feature extractors and the auxiliary-LR generator. The models for Nikon camera images and the CameraFusion dataset are put in the `./ckpt/nikon_pretrain_models/` and `./ckpt/camerafusion_pretrain_models/` folders, respectively.

- For direct testing, we provide four pre-trained DZSR models (`nikon_l1`, `nikon_l1sw`, `camerafusion_l1` and `camerafusion_l1sw`) in the `./ckpt/` folder. Taking `nikon_l1sw` as an example, it denotes the model trained on the Nikon camera images using the $l_1$ and sliced Wasserstein (SW) loss terms.
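For intuition, a sliced Wasserstein term compares two feature distributions by projecting them onto random directions and matching the sorted 1D projections. The NumPy sketch below illustrates that general idea only; the projection count, the L1 transport cost, and the function name are illustrative assumptions, not the exact loss implementation used in this repository.

```python
import numpy as np

def sliced_wasserstein(x, y, n_proj=64, seed=0):
    """Sliced Wasserstein-1 distance between two point sets (sketch).

    x, y: arrays of shape (n_points, dim) holding feature vectors.
    For each random unit direction, project both sets to 1D, sort the
    projections, and average the absolute differences; this is the 1D
    Wasserstein-1 distance for that slice. The result averages slices.
    """
    rng = np.random.default_rng(seed)
    dim = x.shape[1]
    total = 0.0
    for _ in range(n_proj):
        d = rng.normal(size=dim)
        d /= np.linalg.norm(d)          # random direction on the unit sphere
        px, py = np.sort(x @ d), np.sort(y @ d)
        total += np.mean(np.abs(px - py))
    return total / n_proj
```

The distance is zero for identical point sets and grows as the two feature distributions diverge, which is what makes it usable as a distribution-matching loss term alongside a per-pixel $l_1$ term.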
### 3.2 Training
- For Nikon camera images, modify `dataroot` in `train_nikon.sh` and then run the script.

- For the CameraFusion dataset, modify `dataroot` in `train_camerafusion.sh` and then run the script.
### 3.3 Testing
- For Nikon camera images, modify `dataroot` in `test_nikon.sh` and then run the script.

- For the CameraFusion dataset, modify `dataroot` in `test_camerafusion.sh` and then run the script.
### 3.4 Note
- You can specify which GPU to use via `--gpu_ids`, e.g., `--gpu_ids 0,1`, `--gpu_ids 3`, or `--gpu_ids -1` (for CPU mode). In the default setting, all GPUs are used.
- You can refer to the options for more arguments.
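The `--gpu_ids` convention above (comma-separated device ids, with `-1` selecting CPU mode) can be parsed along the following lines. The helper name is hypothetical and not part of this repository; it only illustrates the convention.

```python
def parse_gpu_ids(arg):
    """Parse a --gpu_ids string: '0,1' -> [0, 1]; '-1' -> [] (CPU mode)."""
    ids = [int(tok) for tok in arg.split(',') if tok.strip()]
    # Any negative id (e.g. -1) means CPU mode: return no GPU ids.
    return [i for i in ids if i >= 0]
```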
## 4. Citation
If you find it useful in your research, please consider citing:
```bibtex
@inproceedings{SelfDZSR,
    title={Self-Supervised Learning for Real-World Super-Resolution from Dual Zoomed Observations},
    author={Zhang, Zhilu and Wang, Ruohao and Zhang, Hongzhi and Chen, Yunjin and Zuo, Wangmeng},
    booktitle={European Conference on Computer Vision (ECCV)},
    year={2022}
}

@article{SelfDZSR_PlusPlus,
    title={Self-Supervised Learning for Real-World Super-Resolution from Dual and Multiple Zoomed Observations},
    author={Zhang, Zhilu and Wang, Ruohao and Zhang, Hongzhi and Zuo, Wangmeng},
    journal={IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)},
    year={2024},
    publisher={IEEE}
}
```
## 5. Acknowledgement
This repo is built upon the framework of CycleGAN, and we borrow some code from C2-Matching and DCSR. Thanks for their excellent work!