URSCT-SESR: Reinforced Swin-Convs Transformer for Simultaneous Underwater Sensing Scene Image Enhancement and Super-resolution
Tingdi Ren, Haiyong Xu, Gangyi Jiang, Mei Yu, Xuan Zhang, Biao Wang, and Ting Luo.
<a href="https://colab.research.google.com/drive/1CXQOHG_Yc5aQ3WvlQKLlHNLA89wXkRjA?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="google colab logo"></a>
This repository is the official PyTorch implementation of URSCT-SESR: Reinforced Swin-Convs Transformer for Simultaneous Underwater Sensing Scene Image Enhancement and Super-resolution <img width="1000" src="README_figs/network.png">
<img width="180" src="README_figs/RAW.png"><img width="180" src="README_figs/CF.png"><img width="180" src="README_figs/IBLA.png"><img width="180" src="README_figs/HL.png"><img width="180" src="README_figs/WATERNET.png"> <img width="180" src="README_figs/UWCNN.png"><img width="180" src="README_figs/UCOLOR.png"><img width="180" src="README_figs/USHAPED.png"><img width="180" src="README_figs/OUR.png">
QuickStart
Attention: please make sure your PyTorch version matches the one specified in requirements.txt.
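For a quick check, the sketch below (not part of the repository's code; it only assumes requirements.txt contains a line such as "torch==...") compares your installed PyTorch against the pinned requirement:

```python
import re
import torch

# Read the torch requirement line from requirements.txt (assumes a line like
# "torch==1.11.0" or "torch>=1.8"; adjust the pattern if the file differs).
with open("requirements.txt") as f:
    required = next((line.strip() for line in f if re.match(r"^torch\b", line)), None)

print("installed torch:", torch.__version__)
print("required       :", required)
```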
Start a custom training run
Demo data is provided in the "./dataset" folder, so you can directly run any "*_train.py" file in the "./scripts" folder.
Start a test with a pre-trained model
If you want to use the pre-trained model on real-world images or for testing, please read the data settings described below. After that, run any "*_eval.py" file in the "./scripts" folder.
Start fine-tuning with a pre-trained model
If you have downloaded the pre-trained model and intend to continue training/fine-tuning, please note:
- Due to a code update, the previously uploaded pre-trained weights (a Python dict) do not contain any optimizer state. Hence, please configure the optimizer sensibly (e.g., use a small learning rate); a minimal sketch follows this list.
- The default checkpoint loaded when resuming is "*_bestSSIM.pth" (at line 84/85 in the training code), so please check that the model file name matches.
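A minimal resume/fine-tune sketch is shown below; the checkpoint path, the key layout of the saved dict, and the learning rate are illustrative assumptions rather than the repository's exact interface.

```python
import torch

# Stand-in module for illustration; in practice this would be the URSCT model
# built from the YAML config.
model = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)

# Hypothetical checkpoint path; match it to your own experiment folder.
ckpt = torch.load("exps/quickstart_Enh/models/model_bestSSIM.pth", map_location="cpu")

# The saved dict holds model weights only (no optimizer state); the "state_dict"
# key is an assumption -- inspect ckpt.keys() if loading fails.
state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
model.load_state_dict(state_dict, strict=False)

# Since no optimizer state was saved, create a fresh optimizer with a small
# learning rate for fine-tuning.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
```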
Training
1. Put your dataset into your data folder (for example, "./dataset/demo_data_Enh") with the following structure:
URSCT-SESR<br /> ├─ other files and folders<br /> ├─ dataset<br /> │ ├─ demo_data_Enh<br /> │ │ ├─ train_data<br /> │ │ │ ├─ input<br /> │ │ │ │ ├─ fig1.png<br /> │ │ │ │ ├─ ...<br /> │ │ │ ├─ target<br /> │ │ │ │ ├─ fig1.png<br /> │ │ │ │ ├─ ...<br /> │ │ ├─ val_data<br /> │ │ │ │ ├─ ...<br /> │ │ ├─ test_data<br /> │ │ │ │ ├─ ...
2. Configure the configs/*.yaml:
If you train with the default settings, the *_DIR entries under TRAINING and TEST are the main options you need to edit (see the inspection sketch after this step list).
(1) Enh&SR_opt.yaml for Simultaneous Underwater Sensing Scene Image Enhancement and Super-resolution
(2) Enh_opt.yaml for Underwater Sensing Scene Image Enhancement only
3. Run scripts/*_train.py
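To see which directory options need changing, the short sketch below can help; it only assumes the config is a (possibly nested) YAML mapping whose path options end in "_DIR", as described above, and is not part of the repository's code.

```python
import yaml  # PyYAML


def find_dir_options(node, prefix=""):
    """Recursively yield (dotted_key, value) pairs whose key ends in '_DIR'."""
    if isinstance(node, dict):
        for key, value in node.items():
            path = f"{prefix}.{key}" if prefix else str(key)
            if str(key).endswith("_DIR"):
                yield path, value
            yield from find_dir_options(value, path)


with open("configs/Enh_opt.yaml") as f:
    cfg = yaml.safe_load(f)

# Print every *_DIR entry so you know which paths to point at your dataset.
for path, value in find_dir_options(cfg):
    print(f"{path}: {value}")
```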
Testing
1. As described above, put your test dataset and the model we provide into the following folders (a quick sanity check is sketched after these steps):
URSCT-SESR<br /> ├─ other files and folders<br /> ├─ exps<br /> │ ├─ quickstart_Enh (same as configured above)<br /> │ │ ├─ models<br /> │ │ ├─ model_bestSSIM.pth (downloaded model)<br /> ├─ dataset<br /> │ ├─ demo_data_Enh<br /> │ │ ├─ train_data<br /> │ │ ├─ val_data<br /> │ │ ├─ test_data<br /> │ │ │ ├─ input<br /> │ │ │ │ ├─ fig1.png<br /> │ │ │ │ ├─ ...<br /> │ │ │ ├─ target<br /> │ │ │ │ ├─ fig1.png<br /> │ │ │ │ ├─ ...<br />
2. Run scripts/*_eval.py
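Before running the evaluation, you can sanity-check the layout above with the sketch below; the folder and checkpoint paths follow the demo layout shown here and are assumptions, not values read from the repository's code.

```python
from pathlib import Path

# Demo layout from above; adjust the root for your own dataset.
root = Path("dataset/demo_data_Enh/test_data")
inputs = {p.name for p in (root / "input").glob("*")}
targets = {p.name for p in (root / "target").glob("*")}
unpaired = inputs ^ targets  # files present in only one of the two folders

print(f"{len(inputs)} inputs, {len(targets)} targets, {len(unpaired)} unpaired")

# The downloaded checkpoint should sit at the path configured for the experiment.
ckpt = Path("exps/quickstart_Enh/models/model_bestSSIM.pth")
print("checkpoint found:", ckpt.is_file())
```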
Download
Model
(1) GoogleDrive
(2) BaiduDisk (Password: SESR)
Dataset
(1) LSUI (UIE): Data Paper Homepage
(2) UIEB (UIE): Data Paper Homepage
(3) SQUID (UIE): Data Paper Homepage
(4) UFO (SESR): Data Paper Homepage
(5) USR (SR): Data Paper Homepage
Citation
@article{ren2022reinforced,
title={Reinforced Swin-convs Transformer for Simultaneous Underwater Sensing Scene Image Enhancement and Super-resolution},
author={Ren, Tingdi and Xu, Haiyong and Jiang, Gangyi and Yu, Mei and Zhang, Xuan and Wang, Biao and Luo, Ting},
journal={IEEE Transactions on Geoscience and Remote Sensing},
year={2022},
publisher={IEEE}
}