# Event-Aware Video Deraining via Multi-Patch Progressive Learning
## Requirements

- Python 3.8 (earlier versions should work as well)
- NumPy
- PyTorch 1.9 (earlier versions should work as well)
- torchvision 0.10.1
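
A quick way to check that your environment matches the versions above (just a sanity check, not part of this repo):

```python
import numpy
import torch
import torchvision

# Compare the installed versions against the requirement list above.
print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("numpy:", numpy.__version__)
print("CUDA available:", torch.cuda.is_available())
```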
## Visual Results
| Models | NTURain | RainVID&SS | Real-World |
| --- | --- | --- | --- |
| MSCSC | Google Drive | Google Drive | Google Drive |
| SLDNet | Google Drive | Google Drive | Google Drive |
| S2VD | Google Drive | Google Drive | Google Drive |
| MFGAN | Google Drive | Google Drive | Google Drive |
| Ours | Google Drive | Google Drive | Google Drive |
## Data Preparation

### NTURain
- Download the NTURain dataset from https://github.com/hotndy/SPAC-SupplementaryMaterials
- Extract all clips from the .rar files.
- The extracted files should have the following structure (a small sanity-check sketch is given after the tree):
```
├── Dataset_Testing_RealRain
│   ├── ra1_Rain
│   ├── ...
│   └── rb3_Rain
├── Dataset_Testing_Synthetic
│   ├── a1_Rain
│   ├── a1_GT
│   ├── ...
│   ├── b4_Rain
│   └── b4_GT
└── Dataset_Training_Synthetic
    ├── t1_Rain_01
    ├── ...
    └── t8_Rain_03
```
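
For a quick sanity check of the layout above, here is a minimal sketch that pairs each synthetic rainy clip with its ground-truth folder (the helper name and the root path are hypothetical and not part of this repo):

```python
import os

def list_rain_gt_pairs(root):
    """Pair each *_Rain clip in Dataset_Testing_Synthetic with its *_GT folder."""
    test_dir = os.path.join(root, "Dataset_Testing_Synthetic")
    pairs = []
    for name in sorted(os.listdir(test_dir)):
        if name.endswith("_Rain"):
            gt_dir = os.path.join(test_dir, name.replace("_Rain", "_GT"))
            if os.path.isdir(gt_dir):
                pairs.append((os.path.join(test_dir, name), gt_dir))
    return pairs

# Example: should print pairs such as a1_Rain <-> a1_GT.
for rain_dir, gt_dir in list_rain_gt_pairs("./NTURain"):
    print(rain_dir, "<->", gt_dir)
```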
### RainVID&SS
It is available on Baidu Disk or Google Drive.
### Event
- Follow the steps in https://github.com/uzh-rpg/rpg_esim to install the event-camera simulator, ESIM.
- Simulate events for all NTURain clips. The script utils/make_event_NTURain.sh is helpful; you can also follow the instructions in https://github.com/uzh-rpg/rpg_esim/wiki/Simulating-events-from-a-video.
- The obtained event streams are in .bag format. They should be extracted and stored in .npy or .jpg format. The script utils/event2img.sh is helpful (a minimal accumulation sketch is shown after this list).
- To comply with the anonymity policy, we have not shared our pre-computed event files online. Alternatively, you can download the pre-generated events for NTURain from https://pan.baidu.com/s/1PxHjHSOAW5Q04rsBbWp-UQ?pwd=2aol (PIN: 2aol).
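
As a reference for the .bag-to-image conversion above, here is a minimal sketch of event accumulation. It is not the repo's utils/event2img.sh: the (x, y, t, polarity) array layout, image size, and frame timestamps are assumptions, and the events are assumed to have already been exported from the .bag file:

```python
import numpy as np

def events_to_frames(events, height, width, frame_ts):
    """Accumulate events into one signed count image per frame interval.

    events   : (N, 4) array of (x, y, t, polarity), with polarity in {-1, +1}
    frame_ts : sorted timestamps; events between consecutive timestamps
               are accumulated into one image.
    """
    frames = np.zeros((len(frame_ts) - 1, height, width), dtype=np.float32)
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    t = events[:, 2]
    p = events[:, 3]
    idx = np.searchsorted(frame_ts, t, side="right") - 1
    valid = (idx >= 0) & (idx < len(frame_ts) - 1)
    np.add.at(frames, (idx[valid], y[valid], x[valid]), p[valid])
    return frames

# Hypothetical usage: events exported from the .bag as an (N, 4) .npy array.
# events = np.load("a1_Rain_events.npy")
# frames = events_to_frames(events, 480, 640, np.arange(0.0, 1.0, 1.0 / 30))
# for i, frame in enumerate(frames):
#     np.save(f"a1_Rain_event_{i:04d}.npy", frame)
```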
## Train
- Modify the configurations in train_mpevnet.sh.
- Since we borrow the reimplementation of LiteFlowNet3 from https://github.com/lhao0301/pytorch-liteflownet3 and https://github.com/NVIDIA/flownet2-pytorch, you should follow their steps for installing correlation_package.
- Run the code:

```
bash train_mpevnet.sh
```

If it is the first time you run the code, add the "--preprocess" argument to generate the pre-processed .h5 file.
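
For reference, here is a minimal, purely illustrative sketch of packing a clip's frames into an .h5 file with h5py (the dataset key, paths, and Pillow dependency are assumptions; the actual "--preprocess" code in this repo may organize the data differently):

```python
import glob
import h5py
import numpy as np
from PIL import Image

def pack_clip_to_h5(frame_dir, out_path):
    """Stack all frames of one clip into a single HDF5 dataset."""
    frame_paths = sorted(glob.glob(f"{frame_dir}/*.jpg"))
    frames = np.stack([np.asarray(Image.open(p)) for p in frame_paths])  # (T, H, W, 3)
    with h5py.File(out_path, "w") as f:
        f.create_dataset("frames", data=frames, compression="gzip")

# Hypothetical usage: pack one training clip once, then reuse the .h5 on later runs.
# pack_clip_to_h5("Dataset_Training_Synthetic/t1_Rain_01", "t1_Rain_01.h5")
```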
## Test for NTURain
- Modify the configurations in test_mpevnet.sh.
- Run the code:

```
bash test_mpevnet.sh
```
## Test for Real Videos
- Modify the configurations in test_mpevnet_others.sh.
- Run the code:

```
bash test_mpevnet_others.sh
```
## BibTeX
```
@ARTICLE{10122854,
  author={Sun, Shangquan and Ren, Wenqi and Li, Jingzhi and Zhang, Kaihao and Liang, Meiyu and Cao, Xiaochun},
  journal={IEEE Transactions on Image Processing},
  title={Event-Aware Video Deraining via Multi-Patch Progressive Learning},
  year={2023},
  volume={32},
  number={},
  pages={3040-3053},
  doi={10.1109/TIP.2023.3272283}}
```