Progressive Image Deraining Networks: A Better and Simpler Baseline

[arxiv] [pdf] [supp]

Introduction

This paper provides a better and simpler baseline deraining network by discussing network architecture, input and output, and loss functions. Specifically, by repeatedly unfolding a shallow ResNet, a progressive ResNet (PRN) is proposed to take advantage of recursive computation. A recurrent layer is further introduced to exploit the dependencies of deep features across stages, forming our progressive recurrent network (PReNet). Furthermore, intra-stage recursive computation of ResNet can be adopted in PRN and PReNet to notably reduce network parameters with graceful degradation in deraining performance (PRN_r and PReNet_r). For network input and output, we take both the stage-wise result and the original rainy image as input to each ResNet, and finally output a prediction of the residual image. As for loss functions, a single MSE or negative SSIM loss is sufficient to train PRN and PReNet. Experiments show that PRN and PReNet perform favorably on both synthetic and real rainy images. Considering their simplicity, efficiency, and effectiveness, our models are expected to serve as a suitable baseline in future deraining research.
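
To make the architecture concrete, below is a minimal, illustrative PyTorch sketch of the progressive recurrent idea: each stage takes the original rainy image concatenated with the previous stage's estimate, passes it through a shared shallow ResNet with a convolutional LSTM-style recurrent layer, and emits an updated estimate. Module and variable names are hypothetical; this is not the authors' exact implementation, which is the code shipped in this repository.

```python
import torch
import torch.nn as nn

class SimplePReNet(nn.Module):
    """Illustrative sketch of a progressive recurrent deraining network."""
    def __init__(self, channels=32, num_resblocks=5, num_stages=6):
        super().__init__()
        self.channels = channels
        self.num_stages = num_stages
        # f_in: rainy image concatenated with the current estimate (3 + 3 channels)
        self.f_in = nn.Sequential(nn.Conv2d(6, channels, 3, padding=1), nn.ReLU(inplace=True))
        # convolutional LSTM-style gates: the recurrent layer linking stages
        self.gates = nn.Conv2d(channels * 2, channels * 4, 3, padding=1)
        # f_res: a shallow stack of residual blocks shared across all stages
        self.f_res = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1))
            for _ in range(num_resblocks)])
        # f_out: maps deep features back to a 3-channel image
        self.f_out = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, y):
        b, _, height, width = y.shape
        x = y                                               # stage-0 estimate is the rainy input
        h = y.new_zeros(b, self.channels, height, width)    # recurrent hidden state
        c = y.new_zeros(b, self.channels, height, width)    # recurrent cell state
        outputs = []
        for _ in range(self.num_stages):                    # inter-stage recursive computation
            feat = self.f_in(torch.cat([y, x], dim=1))
            i, f, g, o = torch.chunk(self.gates(torch.cat([feat, h], dim=1)), 4, dim=1)
            c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
            h = torch.sigmoid(o) * torch.tanh(c)
            feat = h
            for block in self.f_res:
                feat = feat + block(feat)                   # residual blocks
            x = self.f_out(feat)                            # stage-wise estimate
            outputs.append(x)
        return outputs                                      # last element is the final result
```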

Prerequisites

Datasets

PRN and PReNet are evaluated on four datasets*: Rain100H [1], Rain100L [1], Rain12 [2] and Rain1400 [3]. Please download the testing datasets from BaiduYun or OneDrive, and place the unzipped folders into ./datasets/test/.

To train the models, please download training datasets: RainTrainH [1], RainTrainL [1] and Rain12600 [3] from BaiduYun or OneDrive, and place the unzipped folders into ./datasets/train/.

*We note that:

(i) The datasets on the website of [1] appear to have been modified, but the models and results in recent papers are all based on the previous version. We therefore upload the original training and testing datasets to BaiduYun and OneDrive.

(ii) For RainTrainH, we strictly exclude 546 rainy images that share the same background contents with testing images. All our models are trained on the remaining 1,254 training samples.

Getting Started

1) Testing

We have placed our pre-trained models into ./logs/.

Run shell scripts to test the models:

bash test_Rain100H.sh   # test models on Rain100H
bash test_Rain100L.sh   # test models on Rain100L
bash test_Rain12.sh     # test models on Rain12
bash test_Rain1400.sh   # test models on Rain1400 
bash test_Ablation.sh   # test models in Ablation Study
bash test_real.sh       # test PReNet on real rainy images

All the results in the paper are also available at BaiduYun. You can place the downloaded results into ./results/ and directly compute all the evaluation metrics reported in the paper.

2) Evaluation metrics

We also provide the MATLAB scripts to compute the average PSNR and SSIM values reported in the paper.

 cd ./statistic
 run statistic_Rain100H.m
 run statistic_Rain100L.m
 run statistic_Rain12.m
 run statistic_Rain1400.m
 run statistic_Ablation.m  # compute the metrics in Ablation Study
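
For a quick cross-check outside MATLAB, the sketch below computes luminance-channel PSNR/SSIM with OpenCV and scikit-image. The numbers reported above come from the MATLAB scripts, so Python values may differ slightly due to implementation details; the file paths are placeholders only.

```python
import cv2
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def psnr_ssim(derained_path, gt_path):
    """Compute PSNR/SSIM on the luminance (Y) channel of a derained/ground-truth pair."""
    out = cv2.imread(derained_path)
    gt = cv2.imread(gt_path)
    out_y = cv2.cvtColor(out, cv2.COLOR_BGR2YCrCb)[:, :, 0]
    gt_y = cv2.cvtColor(gt, cv2.COLOR_BGR2YCrCb)[:, :, 0]
    psnr = peak_signal_noise_ratio(gt_y, out_y, data_range=255)
    ssim = structural_similarity(gt_y, out_y, data_range=255)
    return psnr, ssim

# placeholder paths; adjust to your own result and ground-truth files
print(psnr_ssim('./results/Rain100H/PReNet/rain-001.png',
                './datasets/test/Rain100H/norain-001.png'))
```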

Average PSNR/SSIM values on four datasets:

| Dataset  | PRN         | PReNet      | PRN_r       | PReNet_r    | JORDER [1]  | RESCAN [4]  |
|----------|-------------|-------------|-------------|-------------|-------------|-------------|
| Rain100H | 28.07/0.884 | 29.46/0.899 | 27.43/0.874 | 28.98/0.892 | 26.54/0.835 | 28.88/0.866 |
| Rain100L | 36.99/0.977 | 37.48/0.979 | 36.11/0.973 | 37.10/0.977 | 36.61/0.974 | ---         |
| Rain12   | 36.62/0.952 | 36.66/0.961 | 36.16/0.961 | 36.69/0.962 | 33.92/0.953 | ---         |
| Rain1400 | 31.69/0.941 | 32.60/0.946 | 31.31/0.937 | 32.44/0.944 | ---         | ---         |

*We note that:

(i) The metrics of JORDER [1] are computed directly on the derained images provided by its authors.

(ii) RESCAN [4] is re-trained with its default settings: (1) RESCAN for Rain100H is trained on the full 1,800 rainy images, while our models are all trained on the stricter 1,254-image subset. (2) The re-trained RESCAN model is available here.

(iii) The derained results of JORDER and RESCAN can be downloaded from here, and their metrics in the above table can be computed with the MATLAB scripts.

3) Training

Run shell scripts to train the models:

bash train_PReNet.sh      # train PReNet
bash train_PRN.sh         # train PRN
bash train_PReNet_r.sh    # train PReNet_r
bash train_PRN_r.sh       # train PRN_r

You can run tensorboard --logdir ./logs/your_model_path to monitor the training process.
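
As noted in the Introduction, a single negative SSIM (or MSE) loss suffices to train PRN/PReNet. The sketch below shows one training step under that objective; it uses the third-party pytorch_msssim package as a stand-in for the repository's own SSIM code, and the model, data tensors, and learning-rate decay factor are assumptions for illustration only.

```python
import torch
from pytorch_msssim import ssim   # stand-in SSIM; the repo ships its own implementation

model = SimplePReNet().cuda()     # the hypothetical sketch module from the Introduction
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# decay the learning rate at the milestones listed in the table below (decay factor assumed)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[30, 50, 80], gamma=0.2)

def train_step(rainy, clean):
    """One optimization step on a batch of rainy/clean image pairs scaled to [0, 1]."""
    optimizer.zero_grad()
    outputs = model(rainy)                              # list of stage-wise estimates
    loss = -ssim(outputs[-1], clean, data_range=1.0)    # negative SSIM on the final stage
    loss.backward()
    optimizer.step()
    return loss.item()
```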

Model Configuration

The following tables list the available configuration options.

Training Mode Configurations

| Option         | Default    | Description                                 |
|----------------|------------|---------------------------------------------|
| batchSize      | 18         | Training batch size                         |
| recurrent_iter | 6          | Number of recursive stages                  |
| epochs         | 100        | Number of training epochs                   |
| milestone      | [30,50,80] | Epochs at which to decay the learning rate  |
| lr             | 1e-3       | Initial learning rate                       |
| save_freq      | 1          | Frequency of saving intermediate models     |
| use_GPU        | True       | Whether to use GPU                          |
| gpu_id         | 0          | GPU id                                      |
| data_path      | N/A        | Path to training images                     |
| save_path      | N/A        | Path to save models and training status     |

Testing Mode Configurations

| Option         | Default | Description                |
|----------------|---------|----------------------------|
| use_GPU        | True    | Whether to use GPU         |
| gpu_id         | 0       | GPU id                     |
| recurrent_iter | 6       | Number of recursive stages |
| logdir         | N/A     | Path to the trained model  |
| data_path      | N/A     | Path to testing images     |
| save_path      | N/A     | Path to save results       |
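
The sketch below shows one way these testing options could be wired up with argparse; the flag layout is hypothetical and may not match the actual test scripts in this repository, so consult them for the exact flags.

```python
import argparse

parser = argparse.ArgumentParser(description="PReNet testing options (illustrative)")
# argparse's type=bool is misleading, so parse the flag from its string form explicitly
parser.add_argument("--use_GPU", type=lambda v: v.lower() == "true", default=True,
                    help="use GPU or not")
parser.add_argument("--gpu_id", type=str, default="0", help="GPU id")
parser.add_argument("--recurrent_iter", type=int, default=6, help="number of recursive stages")
parser.add_argument("--logdir", type=str, required=True, help="path to the trained model")
parser.add_argument("--data_path", type=str, required=True, help="path to testing images")
parser.add_argument("--save_path", type=str, required=True, help="path to save results")
opt = parser.parse_args()
print(opt)
```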

References

[1] Yang W, Tan RT, Feng J, Liu J, Guo Z, Yan S. Deep joint rain detection and removal from a single image. In IEEE CVPR 2017.

[2] Li Y, Tan RT, Guo X, Lu J, Brown MS. Rain streak removal using layer priors. In IEEE CVPR 2016.

[3] Fu X, Huang J, Zeng D, Huang Y, Ding X, Paisley J. Removing rain from single images via a deep detail network. In IEEE CVPR 2017.

[4] Li X, Wu J, Lin Z, Liu H, Zha H. Recurrent squeeze-and-excitation context aggregation net for single image deraining. In ECCV 2018.

Citation

 @inproceedings{ren2019progressive,
   title={Progressive Image Deraining Networks: A Better and Simpler Baseline},
   author={Ren, Dongwei and Zuo, Wangmeng and Hu, Qinghua and Zhu, Pengfei and Meng, Deyu},
   booktitle={IEEE Conference on Computer Vision and Pattern Recognition},
   year={2019},
 }