ShadowFormer (AAAI'23)

This is the official implementation of the AAAI 2023 paper ShadowFormer: Global Context Helps Image Shadow Removal.

Introduction

To tackle the image shadow removal problem, we propose a novel transformer-based method, dubbed ShadowFormer, which exploits non-shadow regions to help restore shadow regions. A multi-scale channel attention framework is employed to hierarchically capture global information. Based on that, we propose a Shadow-Interaction Module (SIM) with Shadow-Interaction Attention (SIA) in the bottleneck stage to effectively model the context correlation between shadow and non-shadow regions. For more details, please refer to our original paper.

<p align=center><img width="80%" src="doc/pipeline.jpg"/></p> <p align=center><img width="80%" src="doc/details.jpg"/></p>
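To give a rough feel for how Shadow-Interaction Attention can use the shadow mask, here is a minimal, self-contained PyTorch sketch. It is not the official SIM/SIA implementation: the module name SimplifiedSIA, the tensor shapes, and the mask-based modulation term are illustrative assumptions; please refer to the model code in this repository and the paper for the actual design.

```python
# Minimal sketch of the idea behind Shadow-Interaction Attention (SIA):
# self-attention over spatial tokens whose attention map is re-weighted by a
# shadow-mask-derived interaction term, so shadow queries draw more context
# from non-shadow keys. Illustrative only, NOT the official SIM/SIA module.
import torch
import torch.nn as nn

class SimplifiedSIA(nn.Module):
    def __init__(self, dim, heads=4):
        super().__init__()
        self.heads = heads
        self.scale = (dim // heads) ** -0.5
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x, mask):
        # x:    (B, N, C) bottleneck features flattened over space
        # mask: (B, N)    1 = shadow token, 0 = non-shadow token
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.heads, C // self.heads)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)           # each (B, heads, N, head_dim)
        attn = (q @ k.transpose(-2, -1)) * self.scale  # (B, heads, N, N)
        attn = attn.softmax(dim=-1)

        # Assumed interaction term: boost query/key pairs that cross the shadow
        # boundary (shadow query with non-shadow key, and vice versa).
        m = mask.float().unsqueeze(1)                                   # (B, 1, N)
        cross = m.unsqueeze(-1) * (1 - m).unsqueeze(-2) + \
                (1 - m).unsqueeze(-1) * m.unsqueeze(-2)                 # (B, 1, N, N)
        attn = attn * (1.0 + cross)
        attn = attn / attn.sum(dim=-1, keepdim=True)                    # re-normalize

        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)

# Example call with random data:
# SimplifiedSIA(dim=256)(torch.randn(2, 64, 256), torch.randint(0, 2, (2, 64)))
```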

Requirements

```
pip install -r requirements.txt
```

Datasets

Pretrained models

ISTD | ISTD+ | SRD

Please download the corresponding pretrained model and modify the path to the weights in test.py.

Test

You can directly test the performance of the pre-trained model as follows:

1. Modify the paths to the dataset and the pre-trained model in test.py (see the example below):
```
input_dir  # shadow image input path -- Line 27
weights    # pretrained model path -- Line 31
```
2. Test the model:
```
python test.py --save_images
```
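
For reference, after step 1 the two edited variables in test.py might look like the lines below; the dataset path and checkpoint filename are illustrative placeholders, not fixed names shipped with this repo.

```python
# Illustrative values only -- point these at wherever you placed the data and weights.
input_dir = './ISTD_Dataset/test/'                # shadow image input path (Line 27)
weights   = './pretrained/ISTD_model_latest.pth'  # pretrained model path (Line 31)
```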

You can check the output in ./results.

Train

1. Download the datasets and arrange them in the following structure (a minimal loader sketch for this layout is given after this list):
```
|-- ISTD_Dataset
    |-- train
        |-- train_A # shadow image
        |-- train_B # shadow mask
        |-- train_C # shadow-free GT
    |-- test
        |-- test_A # shadow image
        |-- test_B # shadow mask
        |-- test_C # shadow-free GT
```
2. Modify the following terms in option.py:
```
train_dir  # training set path
val_dir    # testing set path
gpu: 0     # our model can be trained on a single RTX A5000 GPU; add more GPU ids here to train on multiple GPUs
```
3. Train the network. To train on 256x256 images:
```
python train.py --warmup --win_size 8 --train_ps 256
```

or, to train on the original resolution (e.g., 480x640 for ISTD):

```
python train.py --warmup --win_size 10 --train_ps 320
```
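
For illustration, a minimal PyTorch dataset that walks the directory layout from step 1 could look like the sketch below. The class name ISTDTriplets and the loading details are assumptions for this example, not the data pipeline actually used by train.py.

```python
# Minimal sketch of a paired shadow-removal dataset matching the layout above
# (train_A = shadow image, train_B = shadow mask, train_C = shadow-free GT).
# Illustrative only; the repo's own dataset/option handling may differ.
import os
import numpy as np
import torch
from torch.utils.data import Dataset
from PIL import Image

class ISTDTriplets(Dataset):
    def __init__(self, root, split='train'):
        self.dirs = {k: os.path.join(root, split, f'{split}_{k}') for k in 'ABC'}
        self.names = sorted(os.listdir(self.dirs['A']))

    def __len__(self):
        return len(self.names)

    def _load(self, folder, name, gray=False):
        img = Image.open(os.path.join(self.dirs[folder], name))
        arr = np.asarray(img.convert('L' if gray else 'RGB'), dtype=np.float32) / 255.0
        if gray:
            arr = arr[..., None]
        return torch.from_numpy(arr).permute(2, 0, 1)   # CHW in [0, 1]

    def __getitem__(self, idx):
        name = self.names[idx]
        shadow = self._load('A', name)             # shadow image
        mask = self._load('B', name, gray=True)    # shadow mask
        clean = self._load('C', name)              # shadow-free ground truth
        return shadow, mask, clean

# Usage: loader = torch.utils.data.DataLoader(ISTDTriplets('ISTD_Dataset'), batch_size=4)
```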

Evaluation

The results reported in the paper are calculated with the MATLAB script used by previous methods; see evaluation/measure_shadow.m for details. We also provide Python code for calculating the metrics in test.py: run python test.py --cal_metrics to print them.
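
As a rough illustration of this kind of metric computation, below is a minimal scikit-image sketch for PSNR, SSIM, and LAB-space RMSE on a single image pair. It is not the MATLAB script used for the paper's numbers, the function name evaluate_pair and the file paths are illustrative, and implementation details (color space handling, resizing) can change the reported values.

```python
# Minimal sketch of computing PSNR / SSIM / LAB-space RMSE for one image pair.
# Illustrative only; the paper's numbers come from evaluation/measure_shadow.m.
import numpy as np
from skimage import io, color
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(restored_path, gt_path):
    restored = io.imread(restored_path).astype(np.float64) / 255.0
    gt = io.imread(gt_path).astype(np.float64) / 255.0

    psnr = peak_signal_noise_ratio(gt, restored, data_range=1.0)
    ssim = structural_similarity(gt, restored, data_range=1.0, channel_axis=-1)

    # Shadow-removal papers usually report the error in the LAB color space.
    diff = color.rgb2lab(restored) - color.rgb2lab(gt)
    rmse = np.sqrt((diff ** 2).mean())
    return psnr, ssim, rmse

# Usage: print(evaluate_pair('results/sample.png', 'ISTD_Dataset/test/test_C/sample.png'))
```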

Results

Evaluation on ISTD

The evaluation results on ISTD are as follows:

| Method | PSNR | SSIM | RMSE |
| --- | --- | --- | --- |
| ST-CGAN | 27.44 | 0.929 | 6.65 |
| DSC | 29.00 | 0.944 | 5.59 |
| DHAN | 29.11 | 0.954 | 5.66 |
| Fu et al. | 27.19 | 0.945 | 5.88 |
| Zhu et al. | 29.85 | 0.960 | 4.27 |
| ShadowFormer (Ours) | 32.21 | 0.968 | 4.09 |

Visual Results

<p align=center><img width="80%" src="doc/res.jpg"/></p>

Testing results

The testing results on the ISTD, ISTD+, and SRD datasets are available here: results

References

Our implementation is based on Uformer and Restormer. We would like to thank them.

Citation

Preprint available here.

If you use this work, please cite our publication:

L. Guo, S. Huang, D. Liu, H. Cheng and B. Wen, "ShadowFormer: Global Context Helps Image Shadow Removal," AAAI 2023.

Bibtex:

```
@article{guo2023shadowformer,
  title={ShadowFormer: Global Context Helps Image Shadow Removal},
  author={Guo, Lanqing and Huang, Siyu and Liu, Ding and Cheng, Hao and Wen, Bihan},
  journal={arXiv preprint arXiv:2302.01650},
  year={2023}
}
```

Contact

If you have any questions, please contact lanqing001@e.ntu.edu.sg.