SADNet (ECCV, 2020)
By Meng Chang, Qi Li, Huajun Feng, Zhihai Xu
This is the official PyTorch implementation of "Spatial-Adaptive Network for Single Image Denoising" [Paper]
(Note: the source code is a coarse version for reference, and the provided model may not be optimal.)
Prerequisites
- Python 3.6
- PyTorch 1.1
- CUDA 9.0
Get Started
Installation
Update: we now implement Deformable ConvNets V2 with `torchvision.ops.deform_conv2d`. If your environment has torchvision >= 0.9.0 (PyTorch >= 1.8.0), you do not need to follow the instructions below to install DCNv2.
The Deformable ConvNets V2 (DCNv2) module in our code adopts chengdazhi's implementation.
You can compile it for your machine:
```
cd ./dcn
python setup.py develop
```
Please make sure your machine has a GPU, which is required for the DCNv2 module.
Train
- Download the training dataset and use `gen_dataset_*.py` to package it in the h5py format.
- Place the h5py file in `/dataset/train/`, or set `src_path` in `option.py` to your own path.
- You can set any training parameters in `option.py`. After that, train the model:

```
cd $SADNet_ROOT
python train.py
```
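A minimal sketch of how training images could be cropped into patches and packaged into an h5py file (the patch size, stride, and per-patch dataset layout here are illustrative assumptions; see `gen_dataset_*.py` for the actual format):

```python
import os
import tempfile

import h5py
import numpy as np

# Hypothetical patch size and stride; the real values live in gen_dataset_*.py.
patch, stride = 64, 64
img = np.random.rand(128, 128, 3).astype(np.float32)  # stand-in for one training image

path = os.path.join(tempfile.gettempdir(), "sadnet_demo.h5")
with h5py.File(path, "w") as f:
    idx = 0
    for y in range(0, img.shape[0] - patch + 1, stride):
        for x in range(0, img.shape[1] - patch + 1, stride):
            # Store each patch as its own dataset, keyed by a running index.
            f.create_dataset(str(idx), data=img[y:y + patch, x:x + patch])
            idx += 1

with h5py.File(path, "r") as f:
    n_patches = len(f.keys())
print(n_patches)  # 2x2 grid of 64-pixel patches from a 128x128 image
```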
Test
- Download the trained models from Google Drive/Baidu Drive (code: l9qr) and place them in `/ckpt/`.
- Place the testing dataset in `/dataset/test/`, or set the testing path in `option.py` to your own path.
- Set the parameters in `option.py` (e.g. 'epoch_test', 'gray', etc.).
- Test the trained models:

```
cd $SADNet_ROOT
python test.py
```
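Denoising results are conventionally reported in PSNR; a minimal NumPy sketch of the metric (not part of this repository):

```python
import numpy as np

def psnr(clean, denoised, data_range=1.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((np.asarray(clean, dtype=np.float64) -
                   np.asarray(denoised, dtype=np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

# A uniform error of 0.1 on a [0, 1] image gives MSE = 0.01 -> 20 dB.
print(psnr(np.zeros((8, 8)), np.full((8, 8), 0.1)))  # 20.0
```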
Citation
If you find the code helpful in your research or work, please cite the following paper.
```
@article{chang2020spatial,
  title={Spatial-Adaptive Network for Single Image Denoising},
  author={Chang, Meng and Li, Qi and Feng, Huajun and Xu, Zhihai},
  journal={arXiv preprint arXiv:2001.10291},
  year={2020}
}
```
Acknowledgments
The DCNv2 module in our code is adopted from chengdazhi's implementation.