Unidirectional Video Denoising by Mimicking Backward Recurrent Modules with Look-ahead Forward Ones
This is the source code for our paper "Unidirectional Video Denoising by Mimicking Backward Recurrent Modules with Look-ahead Forward Ones" (ECCV 2022).
Usage
Dependencies
You can create a conda environment with all the dependencies by running
conda env create -f requirements.yaml -n <env_name>
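Then activate the environment before running any of the commands below:
conda activate <env_name>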
Datasets
For synthetic Gaussian noise, the DAVIS-2017-trainval-480p dataset is used for training, and DAVIS-2017-test-dev-480p and Set8 are used for testing. For real-world raw noise, the CRVD dataset is used for both training and testing.
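In the synthetic setting, noisy inputs are obtained by adding white Gaussian noise to the clean DAVIS/Set8 frames. A minimal sketch of that corruption step is shown below; the noise level sigma = 30 is only an example value, not one prescribed by the paper.

import numpy as np

def add_gaussian_noise(frame, sigma=30):
    # frame: clean uint8 frame of shape (H, W, 3); sigma is in 8-bit intensity units (example value)
    noisy = frame.astype(np.float32) + sigma * np.random.randn(*frame.shape).astype(np.float32)
    return np.clip(noisy, 0, 255).astype(np.uint8)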
Testing
Download the pretrained models from Google Drive or Baidu Netdisk. We also provide denoised results (the tractor sequence from DAVIS-2017-test-dev-480p) for visual comparison.
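To go beyond visual comparison, the provided denoised frames can be scored against their clean counterparts with a standard PSNR computation such as the sketch below; the file paths are hypothetical.

import numpy as np
import cv2

def psnr(a, b, peak=255.0):
    # peak signal-to-noise ratio between two same-sized uint8 images
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# hypothetical paths to one denoised frame and the matching clean frame
denoised = cv2.imread('denoised/tractor/00000.png')
clean = cv2.imread('DAVIS/JPEGImages/480p/tractor/00000.jpg')
print(psnr(denoised, clean))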
- For synthetic Gaussian noise (see the example command after this list),
cd test_models
python sRGB_test.py \
--model_file <path to model file> \
--test_path <path to test dataset>
- For real-world raw noise,
cd test_models
python CRVD_test.py \
--model_file <path to model file> \
--test_path <path to CRVD dataset>
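For example, to evaluate the sRGB model, assuming the pretrained weights and test set were downloaded to the hypothetical locations below:
cd test_models
python sRGB_test.py \
--model_file ../pretrained/sRGB_model.pth \
--test_path ../datasets/DAVIS-2017-test-dev-480p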
Training
- For synthetic Gaussian noise,
cd train_models
python sRGB_train.py \
--trainset_dir <path to train dataset> \
--valset_dir <path to validation set> \
--log_dir <path to log dir>
- For real-world raw noise,
cd train_models
python CRVD_train.py \
--CRVD_dir <path to CRVD dataset> \
--log_dir <path to log dir>
- For distributed training with synthetic Gaussian noise (a sketch of the typical DDP setup follows this list),
cd train_models
python -m torch.distributed.launch --nproc_per_node=4 sRGB_train_distributed.py \
--trainset_dir <path to train dataset> \
--valset_dir <path to validation set> \
--log_dir <path to log dir>
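torch.distributed.launch spawns one process per GPU and passes each process its local rank. A minimal sketch of the initialization pattern such a script typically follows is given below; it is not the authors' code, and the model is a placeholder.

import argparse
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

parser = argparse.ArgumentParser()
parser.add_argument('--local_rank', type=int, default=0)  # set by torch.distributed.launch
args = parser.parse_args()

dist.init_process_group(backend='nccl')      # one process per GPU
torch.cuda.set_device(args.local_rank)

net = torch.nn.Conv2d(3, 3, 3, padding=1).cuda(args.local_rank)  # placeholder for the denoising network
net = DDP(net, device_ids=[args.local_rank])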
Citation
If you find our work useful in your research or publication, please cite:
@inproceedings{li2022unidirectional,
title={Unidirectional Video Denoising by Mimicking Backward Recurrent Modules with Look-ahead Forward Ones},
author={Li, Junyi and Wu, Xiaohe and Niu, Zhenxing and Zuo, Wangmeng},
booktitle={ECCV},
year={2022}
}