Deep WaveNet
Wavelength-based Attributed Deep Neural Network for Underwater Image Restoration
Accepted in ACM Transactions on Multimedia Computing, Communications, and Applications.
The web-app has been released (basic version). Best viewed in the latest version of Firefox. Note that Heroku allows CPU-based computation only, with limited memory; hence, the app processes input images at a lower resolution of 256x256. To reproduce the original results, use the codes above.
Google Colab demo:
- This paper deals with underwater image restoration.
- For this, we consider two main low-level vision tasks:
  - image enhancement, and
  - super-resolution.
- For underwater image enhancement (UIE), we utilize the publicly available EUVP and UIEB datasets.
- For super-resolution, we use the UFO-120 dataset.
- Below, we provide detailed instructions for each task in a single README to reproduce the original results.
arXiv version
Contents
- Results
- Prerequisites
- Datasets Preparation
- Usage
- Evaluation Metrics
- Processing underwater degraded videos
- Underwater Semantic Segmentation and 2D Pose Estimation Results
- License and Citation
- Send us feedback
- Acknowledgements
- Future Releases
Results
Prerequisites
| Build Type | Linux | MacOS | Windows |
|---|---|---|---|
| Script | env | TBA | TBA |
Also, the code works with the minimum requirements given below.
```
# tested with the following dependencies on an Ubuntu 16.04 LTS system:
Python 3.5.2
PyTorch 1.0.1.post2
torchvision 0.2.2
OpenCV 4.0.0
scipy 1.2.1
numpy 1.16.2
tqdm
```
To install the dependencies in a Linux environment:
```
pip install -r requirements.txt
```
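After installation, a quick sanity check can confirm the environment (a minimal sketch; it only prints the installed versions and CUDA availability):

```python
# check_env.py: print installed versions of the dependencies listed above
import torch
import torchvision
import cv2
import scipy
import numpy

print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("opencv:", cv2.__version__)
print("scipy:", scipy.__version__)
print("numpy:", numpy.__version__)
print("CUDA available:", torch.cuda.is_available())
```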
Datasets Preparation
To test Deep WaveNet on the EUVP dataset
```
cd uie_euvp
```
- Download the test_samples of the EUVP dataset into the current directory (preferably).
- If you have downloaded them elsewhere on the machine, set the absolute paths of the test-set arguments in options.py:
  - `--testing_dir_inp`
  - `--testing_dir_gt`
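For reference, a minimal sketch of how these arguments might be declared in options.py (the actual file may differ; the default paths here are placeholders):

```python
# options.py (sketch): only the test-set path arguments are shown
import argparse

parser = argparse.ArgumentParser()
# absolute path to the degraded input test images
parser.add_argument('--testing_dir_inp', type=str, default='./test_samples/Inp/')
# absolute path to the corresponding ground-truth clean images
parser.add_argument('--testing_dir_gt', type=str, default='./test_samples/GTr/')
opt = parser.parse_args()
```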
To test Deep WaveNet on the UIEB and Challenging-60 datasets
```
cd uie_uieb
```
- Download the UIEB and Challenging-60 datasets into the current directory (preferably).
- If you have downloaded them elsewhere on the machine, set the absolute paths of the test-set arguments in options.py:
  - `--testing_dir_inp`
  - `--testing_dir_gt`
- The UIEB test set used in our paper is available on Google Drive.
- Set the above arguments for this set too if you use it.
- Note that the Challenging-60 set may not have ground-truth clean images, so you can leave the `--testing_dir_gt` argument blank for it (a no-reference evaluation sketch follows this list).
- Our results on the Challenging-60 set are available on Google Drive.
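Since Challenging-60 lacks ground truth, full-reference metrics (PSNR/SSIM) do not apply; you can still score the restored outputs with the no-reference UIQM from the FUnIE-GAN evaluation utilities used in the Evaluation Metrics section below. A sketch, assuming the module name and that it returns per-image scores:

```python
# No-reference evaluation for Challenging-60 (no ground-truth images).
# measure_UIQMs is the FUnIE-GAN evaluation utility referenced below;
# the module path and return type (per-image scores) are assumptions.
import numpy as np
from measure_uiqm import measure_UIQMs  # assumed module path

uiqm_scores = measure_UIQMs("./facades/")  # folder of restored outputs
print("UIQM mean: {0:.3f}, std: {1:.3f}".format(
    np.mean(uiqm_scores), np.std(uiqm_scores)))
```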
To test Deep WaveNet on the UFO-120 dataset
```
cd super-resolution
```
- For super-resolution, we provide separate code files for the following configs: 2X, 3X, and 4X.
- To test on 2X, use the following steps:
```
cd 2X
```
- Download the UFO-120 test set into the current directory (preferably).
- If you have already downloaded it elsewhere on the machine, set the absolute paths of the test-set arguments in options.py:
  - `--testing_dir_inp`
  - `--testing_dir_gt`
- `lrd` consists of lower-resolution images, whereas `hr` consists of the corresponding high-resolution images (a pairing sanity check is sketched after this list).
- Repeat the above steps for the remaining SR configs: 3X and 4X.
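As referenced above, you can quickly confirm that every image in `lrd` has a counterpart in `hr` (a minimal sketch; it assumes paired images share the same filename in both folders):

```python
# Verify lrd/hr pairing in the UFO-120 test set.
# Assumes corresponding images share the same filename in both folders.
import os

lrd_dir, hr_dir = "./lrd/", "./hr/"
lrd_files = set(os.listdir(lrd_dir))
hr_files = set(os.listdir(hr_dir))

missing = sorted(lrd_files - hr_files)
if missing:
    print("No high-resolution counterpart for:", missing[:10])
else:
    print("All {0} low-resolution images are paired.".format(len(lrd_files)))
```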
For training
- Similar to the test sets, define the absolute paths of the training-set folders in the respective options.py file for each of the datasets above.
Usage
For testing
- Once the test sets are set up as described above, you can test a model for a given task using the following command:
```
export CUDA_VISIBLE_DEVICES=0  # optional
python test.py
```
- Results will be saved in the `facades` folder of the pwd.
For training
- Once the training sets are set up as described above, you can train a model for a given task using the following command:
```
export CUDA_VISIBLE_DEVICES=0  # optional
python train.py --checkpoints_dir --batch_size --learning_rate
```
- Models will be saved in `--checkpoints_dir` with the naming convention `netG_[epoch].pt`.
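For example, an invocation with illustrative values (these are placeholders, not the paper's hyperparameters):

```
export CUDA_VISIBLE_DEVICES=0
python train.py --checkpoints_dir ./checkpoints --batch_size 8 --learning_rate 0.0002
```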
- Demo code for plotting the loss curve during training is provided in the utils/loss folder; a minimal example is sketched below.
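For reference, a minimal sketch of such a plot, assuming the per-epoch losses were logged to a plain-text file with one value per line (loss_log.txt is a placeholder name):

```python
# Plot a training loss curve from a plain-text log (one loss per line).
import matplotlib.pyplot as plt

with open("loss_log.txt") as f:  # placeholder log file
    losses = [float(line) for line in f if line.strip()]

plt.plot(range(1, len(losses) + 1), losses)
plt.xlabel("Epoch")
plt.ylabel("Training loss")
plt.title("Deep WaveNet training loss")
plt.savefig("loss_curve.png", dpi=150)
```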
Evaluation Metrics
- Image quality metrics (IQMs) used in this work are provided in both Python and Matlab.
- Thanks to FUnIE-GAN for providing the Python implementations of the IQMs.
- Sample usage of the Python IQM implementation is shown below.
```python
### compute SSIM, PSNR, and UIQM on the restored results
# SSIMs_PSNRs and measure_UIQMs are the FUnIE-GAN evaluation utilities;
# import them from wherever they live in your checkout.
import numpy as np

SSIM_measures, PSNR_measures = SSIMs_PSNRs(CLEAN_DIR, result_dir)
print("SSIM on {0} samples".format(len(SSIM_measures)) + "\n")
print("Mean: {0} std: {1}".format(np.mean(SSIM_measures), np.std(SSIM_measures)) + "\n")
print("PSNR on {0} samples".format(len(PSNR_measures)) + "\n")
print("Mean: {0} std: {1}".format(np.mean(PSNR_measures), np.std(PSNR_measures)) + "\n")

measure_UIQMs(result_dir)
```
- Use the same procedure for EUVP and UFO-120.
- For UIEB, we have utilized the Matlab implementations. Before running ssim_psnr.m, set the clean path at Line1 and Line2, and the result path at Line.
- Below are the results you should get for the EUVP dataset:

| Method | MSE | PSNR | SSIM |
|---|---|---|---|
| Deep WaveNet | 0.29 | 28.62 | 0.83 |

- Below are the results you should get for the UIEB dataset:

| Method | MSE | PSNR | SSIM |
|---|---|---|---|
| Deep WaveNet | 0.60 | 21.57 | 0.80 |

- Below are the results you should get for the UFO-120 dataset:

| Method | PSNR | SSIM | UIQM |
|---|---|---|---|
| Deep WaveNet (2X) | 25.71 | 0.77 | 2.99 |
| Deep WaveNet (3X) | 25.23 | 0.76 | 2.96 |
| Deep WaveNet (4X) | 25.08 | 0.74 | 2.97 |
Processing underwater degraded videos
- We have also provided sample code for processing degraded underwater videos.
- To run it, use the following steps:
```
cd uw_video_processing
```
- Download the degraded underwater video into the pwd.
- As an example, we have used a copyright-free sample degraded video, provided as degraded_video.mp4 in the pwd.
- To test a custom degraded video, replace the video path at the indicated line in test.py.
- For the output enhanced video path, set the indicated line.
- To run, execute:
```
python test.py
```
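For reference, the overall frame-by-frame flow resembles the sketch below (a minimal illustration, not the repository's test.py; enhance_frame stands in for a forward pass through the trained model, and the output filename is a placeholder):

```python
# Frame-by-frame enhancement of a degraded underwater video with OpenCV.
import cv2

def enhance_frame(frame):
    # placeholder: the real code runs the frame through Deep WaveNet
    return frame

cap = cv2.VideoCapture("degraded_video.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("enhanced_video.mp4",  # placeholder output path
                      cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    out.write(enhance_frame(frame))

cap.release()
out.release()
```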
Underwater Semantic Segmentation and 2D Pose Estimation Results
- To generate segmentation maps of the enhanced images, follow SUIM.
- Post-processing code related to this is available in the utils folder.
- To generate the 2D pose of human divers in an enhanced underwater image, follow OpenPose.
License and Citation
- This software is for academic purposes only; it may not be used in commercial products in any form.
- If you use this work or its codes (for academic purposes only), please cite the following:
@misc{sharma2021wavelengthbased,
title={Wavelength-based Attributed Deep Neural Network for Underwater Image Restoration},
author={Prasen Kumar Sharma and Ira Bisht and Arijit Sur},
year={2021},
eprint={2106.07910},
archivePrefix={arXiv},
primaryClass={eess.IV}
}
@article{islam2019fast,
title={Fast Underwater Image Enhancement for Improved Visual Perception},
author={Islam, Md Jahidul and Xia, Youya and Sattar, Junaed},
journal={IEEE Robotics and Automation Letters (RA-L)},
volume={5},
number={2},
pages={3227--3234},
year={2020},
publisher={IEEE}
}
@article{8917818,
title={An Underwater Image Enhancement Benchmark Dataset and Beyond},
author={Li, Chongyi and Guo, Chunle and Ren, Wenqi and Cong, Runmin and Hou, Junhui and Kwong, Sam and Tao, Dacheng},
journal={IEEE Transactions on Image Processing},
year={2020},
volume={29},
pages={4376-4389},
doi={10.1109/TIP.2019.2955241}
}
@inproceedings{eriba2019kornia,
author = {E. Riba, D. Mishkin, D. Ponsa, E. Rublee and G. Bradski},
title = {Kornia: an Open Source Differentiable Computer Vision Library for PyTorch},
booktitle = {Winter Conference on Applications of Computer Vision},
year = {2020},
url = {https://arxiv.org/pdf/1910.02190.pdf}
}
@inproceedings{islam2020suim,
title={{Semantic Segmentation of Underwater Imagery: Dataset and Benchmark}},
author={Islam, Md Jahidul and Edge, Chelsey and Xiao, Yuyang and Luo, Peigen and Mehtaz, Muntaqim and Morse, Christopher and Enan, Sadman Sakib and Sattar, Junaed},
booktitle={IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
year={2020},
organization={IEEE/RSJ}
}
@article{8765346,
author = {Z. {Cao} and G. {Hidalgo Martinez} and T. {Simon} and S. {Wei} and Y. A. {Sheikh}},
journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
title = {OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields},
year = {2019}
}
Send us feedback
Acknowledgements
- For computing resources, we acknowledge the Department of Biotechnology, Govt. of India, for its financial support of the project BT/COE/34/SP28408/2018.
- Some portions of the code are adapted from FUnIE-GAN. The authors gratefully acknowledge it!
- We acknowledge the support of publicly available datasets EUVP, UIEB, and UFO-120.
Future Releases
- Release of the web-app. [Done]
- More flexible training modules using visdom will be added soon.