# Depth-Aware U-shape Transformer (DAUT)

Underwater image enhancement using a depth-aware U-shape transformer.

The official implementation of the Depth-Aware U-shape Transformer (DAUT).
DAUT is based on the U-shape Transformer (paper, implementation) and uses DPT, Vision Transformers for Dense Prediction (paper, implementation), for depth estimation.
Below is a visual comparison of DAUT against the U-shape Transformer and other physical and non-physical model-based enhancement methods on sample images from our test dataset, covering different water types. DAUT achieves state-of-the-art results.
<p align="center"> <img width="800" src="./figs/f5.png"> </p>
<p align="center"> <img width="800" src="./figs/f4.png"> </p>

## Setup
Install the required dependencies listed in `requirement.txt`.
## Training
<p align="center"> <img width="800" src="./figs/f2.png"> <!-- <p><b>Training Pipeline of Our Depth Aware U-shape Transformer Network</b></p> --> </p>

The training dataset will be uploaded.
## Testing
<p align="center"> <img width="800" src="./figs/f1.png"> <!-- <p><b>The Depth Aware U-shape Transformer Network</b></p> --> </p>

### Estimate depth maps for underwater images
- Download the DPT model weights and put them in `DPT/weights/`.
- Put your underwater images in `DPT/input/`.
- Run `python DPT/run_segmentation.py`.
- The depth maps will be written to `DPT/output_monodepth/`.
- Move the depth images (`*.png`) from `DPT/output_monodepth/` to `test/depth/`.
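The last step above can be scripted instead of moving files by hand. A minimal sketch (the helper name `collect_depth_maps` is ours, not part of the repository; the default paths follow the layout described above):

```python
import shutil
from pathlib import Path

def collect_depth_maps(src="DPT/output_monodepth", dst="test/depth"):
    """Move the DPT depth maps (*.png) from src into the test depth folder."""
    src_dir, dst_dir = Path(src), Path(dst)
    dst_dir.mkdir(parents=True, exist_ok=True)  # create test/depth/ if missing
    moved = []
    for png in sorted(src_dir.glob("*.png")):
        target = dst_dir / png.name
        shutil.move(str(png), target)  # move, keeping the original filename
        moved.append(target)
    return moved
```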
### Enhance underwater images
- Download the DAUT model weights and put them in `saved_models/G/`.
- Put your underwater images in `test/input/`, along with their depth maps in `test/depth/`.
- Run `test.ipynb`.
- The enhanced images will be written to `test/output/`.
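Each input image must have a matching depth map. A small sketch of that pairing by filename stem (the helper name `pair_inputs_with_depth` is hypothetical; depth maps are the `.png` files produced by DPT, while inputs may use any image extension):

```python
from pathlib import Path

def pair_inputs_with_depth(input_dir="test/input", depth_dir="test/depth"):
    """Match each underwater image with its depth map by filename stem.

    Returns (pairs, missing): pairs of (image, depth) paths, plus any
    input images that have no corresponding depth map.
    """
    depth = {p.stem: p for p in Path(depth_dir).glob("*.png")}
    pairs, missing = [], []
    for img in sorted(Path(input_dir).iterdir()):
        if not img.is_file():
            continue
        if img.stem in depth:
            pairs.append((img, depth[img.stem]))
        else:
            missing.append(img)  # flag inputs that still need a depth map
    return pairs, missing
```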
We provide a few sample images with their depth maps in the `test` directory, so you can run `test.ipynb` after downloading only DAUT's weights.