DC-ShadowNet (ICCV'2021)

Introduction

DC-ShadowNet: Single-Image Hard and Soft Shadow Removal Using Unsupervised Domain-Classifier Guided Network<br> International Conference on Computer Vision (ICCV'2021)

Yeying Jin, Aashish Sharma and Robby T. Tan

arXiv [Paper] [Supplementary] [Poster] [Slides] [Video] [Zhihu]

Prerequisites

```bash
git clone https://github.com/jinyeying/DC-ShadowNet-Hard-and-Soft-Shadow-Removal.git
cd DC-ShadowNet-Hard-and-Soft-Shadow-Removal/
conda create -n shadow python=3.7
conda activate shadow
conda install pytorch=1.10.2 torchvision torchaudio cudatoolkit=11.3 -c pytorch
python3 -m pip install -r requirements.txt
```
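To confirm the environment matches the pinned versions, a quick sanity check (a sketch; the exact torchvision build depends on the conda solve):

```python
import torch
import torchvision

print(torch.__version__, torchvision.__version__)  # expect 1.10.2 and its matching torchvision
print(torch.cuda.is_available())                   # True if the cudatoolkit 11.3 install is usable
```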

Datasets

  1. SRD: Train | BaiduPan, Test, Shadow Masks

  2. AISTD|ISTD+ [link]

  3. ISTD [link]

  4. USR: Unpaired Shadow Removal Dataset [link]

  5. LRSS: Soft Shadow Dataset [link]<br> The LRSS dataset contains 134 shadow images (62 pairs of shadow and shadow-free images).<br> We use 34 pairs for testing and 100 shadow images for training.<br> The shadow-free training images consist of 28 from LRSS and 72 randomly selected from the USR dataset (a sketch for assembling this split follows the list).<br>

    [Dropbox][BaiduPan(code:t9c7)]
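A sketch of how one might assemble the shadow-free training folder described in item 5 (the paths are hypothetical and the 72-image USR selection is random, so this does not reproduce the exact original split):

```python
import random
import shutil
from pathlib import Path

LRSS_FREE = Path("LRSS/shadow_free")   # hypothetical path: the 28 LRSS shadow-free training images
USR_FREE = Path("USR/shadow_free")     # hypothetical path: USR shadow-free images
OUT_B = Path("dataset/LRSS/trainB")    # shadow-free training folder
OUT_B.mkdir(parents=True, exist_ok=True)

random.seed(0)                         # fix the random 72-image selection
picks = random.sample(sorted(USR_FREE.glob("*")), 72)
for p in list(LRSS_FREE.glob("*")) + picks:
    shutil.copy(p, OUT_B / p.name)     # 28 + 72 = 100 shadow-free images
```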

Pre-trained Models and Shadow Removal Results: [Dropbox] | [BaiduPan(code:gr59)]

| Dataset | Model Dropbox | Model BaiduPan | Model Put in Path | Results Dropbox | Results BaiduPan |
|---|---|---|---|---|---|
| SRD | [Dropbox] | [BaiduPan(code:zhd2)] | results/SRD/model/ | [Dropbox] | [BaiduPan(code:28bv)] |
| AISTD/ISTD+ | [Dropbox] | [BaiduPan(code:cfn9)] | results/AISTD/model/ | [Dropbox] | [BaiduPan(code:3waf)] |
| ISTD | [Dropbox] | [BaiduPan(code:b8o0)] | results/ISTD/model/ | [Dropbox] | [BaiduPan(code:hh4n)] |
| USR | [Dropbox] | [BaiduPan(code:e0a8)] | results/USR/model/ | [Dropbox] | [BaiduPan(code:u7ec)] |
| LRSS | - | - | - | [Dropbox] | [BaiduPan(code:bbns)] |

Single Image Test

  1. Download the pre-trained SRD model [Dropbox] | [BaiduPan(code:zhd2)] and put it in results/SRD/model/
  2. Put the test images in test_input/; results are written to results/output/:
```
${DC-ShadowNet-Hard-and-Soft-Shadow-Removal}
|-- test_input           ## Shadow
|-- results
    |-- output           ## Results
```
```bash
CUDA_VISIBLE_DEVICES='0' python main_test_single.py
```
<p align="left"> <img width="350" src="results/SRD/500000/inputA_outputB/IMG_6456.png"> </p>
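To create this layout in one step, a small convenience sketch using the folder names from the tree above:

```python
import os

# Create the folders main_test_single.py reads from and writes to.
for d in ("test_input", "results/output", "results/SRD/model"):
    os.makedirs(d, exist_ok=True)
```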

Dataset Test

  1. Download the pre-trained models (see the table above) and place them under the corresponding results/.../model/ folder (e.g. results/SRD/model/)
  2. For the SRD test dataset in dataset/SRD/testA/, results are written to results/SRD/500000(iteration)/outputB/:
```
${DC-ShadowNet-Hard-and-Soft-Shadow-Removal}
|-- dataset
    |-- SRD
      |-- testA           ## Shadow
    |-- AISTD
      |-- testA           ## Shadow
    |-- USR
      |-- testA           ## Shadow
|-- results
    |-- SRD
      |-- model           ## SRD_params_0500000.pt
      |-- 500000/outputB/ ## Results
    |-- AISTD
      |-- model           ## AISTD_params_0500000.pt
      |-- 500000/outputB/ ## Results
    |-- ISTD
      |-- model           ## ISTD_params_0600000.pt
      |-- 600000/outputB/ ## Results
    |-- USR
      |-- model           ## USR_params_0600000.pt
      |-- 600000/outputB/ ## Results
```
```bash
CUDA_VISIBLE_DEVICES='0' python main_test.py --dataset SRD --datasetpath /home1/yeying/DC-ShadowNet-Hard-and-Soft-Shadow-Removal/dataset/SRD --use_original_name True --im_suf_A .jpg
CUDA_VISIBLE_DEVICES='0' python main_test.py --dataset AISTD --datasetpath /home1/yeying/DC-ShadowNet-Hard-and-Soft-Shadow-Removal/dataset/AISTD --use_original_name True --im_suf_A .png
CUDA_VISIBLE_DEVICES='0' python main_test.py --dataset ISTD --datasetpath /home1/yeying/DC-ShadowNet-Hard-and-Soft-Shadow-Removal/dataset/ISTD --use_original_name True --im_suf_A .png
CUDA_VISIBLE_DEVICES='0' python main_test.py --dataset USR --datasetpath /home1/yeying/DC-ShadowNet-Hard-and-Soft-Shadow-Removal/dataset/USR --use_original_name True --im_suf_A .jpg
```
<p align="left"> <img width="550" src="teaser/hard_shadow.PNG"> </p> <p align="left"> <img width="550" src="teaser/soft_shadow.PNG"> </p>
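To run all four dataset tests in one go, a convenience sketch that wraps the commands above (DATA_ROOT is a placeholder for your local absolute dataset path):

```python
import os
import subprocess

DATA_ROOT = "dataset"  # placeholder; substitute your absolute dataset path
suffix = {"SRD": ".jpg", "AISTD": ".png", "ISTD": ".png", "USR": ".jpg"}
env = {**os.environ, "CUDA_VISIBLE_DEVICES": "0"}

for name, ext in suffix.items():
    subprocess.run(
        ["python", "main_test.py",
         "--dataset", name,
         "--datasetpath", f"{DATA_ROOT}/{name}",
         "--use_original_name", "True",
         "--im_suf_A", ext],
        check=True, env=env)
```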

Train

```
${DC-ShadowNet-Hard-and-Soft-Shadow-Removal}
|-- dataset
    |-- SRD
      |-- trainA ## Shadow
      |-- trainB ## Shadow-free
      |-- testA  ## Shadow
      |-- testB  ## Shadow-free
```
```bash
CUDA_VISIBLE_DEVICES='0' python main_train.py --dataset SRD --datasetpath /home1/yeying/DC-ShadowNet-Hard-and-Soft-Shadow-Removal/dataset/SRD --iteration 1000000
```
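Training is unpaired: trainA (shadow) and trainB (shadow-free) are sampled independently, so the two folders need not contain corresponding images. A minimal sketch of how such data is typically loaded (main_train.py is the reference implementation; the resize-to-256 transform here is an assumption):

```python
from pathlib import Path

from PIL import Image
from torch.utils.data import DataLoader, Dataset
import torchvision.transforms as T

class DomainFolder(Dataset):
    """Yields images from a single domain folder (trainA or trainB)."""
    def __init__(self, root, transform):
        self.paths = sorted(Path(root).glob("*"))
        self.transform = transform

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, i):
        return self.transform(Image.open(self.paths[i]).convert("RGB"))

tf = T.Compose([T.Resize((256, 256)), T.ToTensor()])
loader_A = DataLoader(DomainFolder("dataset/SRD/trainA", tf), batch_size=1, shuffle=True)
loader_B = DataLoader(DomainFolder("dataset/SRD/trainB", tf), batch_size=1, shuffle=True)
# Each training step draws one shadow image and one (unrelated) shadow-free image.
```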

Train with Shadow-Free Chromaticity Loss

```bash
CUDA_VISIBLE_DEVICES='0' python main_train.py --dataset SRD --datasetpath /home1/yeying/DC-ShadowNet-Hard-and-Soft-Shadow-Removal/dataset/SRD --iteration 1000000 --use_ch_loss True
```
```
${DC-ShadowNet-Hard-and-Soft-Shadow-Removal}
|-- dataset
    |-- SRD
      |-- trainA ## Shadow
      |-- trainB ## Shadow-free
      |-- trainC ## Shadow-Free Chromaticity Maps after Illumination Compensation
      |-- testA  ## Shadow
      |-- testB  ## Shadow-free
      |-- testC  ## Shadow-Free Chromaticity Maps after Illumination Compensation
```

The trainC and testC folders are produced by 0_Shadow-Free_Chromaticity_matlab/physics_all.m.

| Dataset | trainC | testC |
|---|---|---|
| SRD | [Dropbox] [BaiduPan(code:srdc)] | [Dropbox] [BaiduPan(code:srdc)] |
| ISTD | [Dropbox] [BaiduPan(code:istd)] | [Dropbox] [BaiduPan(code:istd)] |
| USR | [Dropbox] [BaiduPan(code:usrc)] | [Dropbox] [BaiduPan(code:usrc)] |
| LRSS | [Dropbox] [BaiduPan(code:lrss)] | [Dropbox] [BaiduPan(code:lrss)] |

Option 1 MATLAB: inputs and results

0_Shadow-Free_Chromaticity_matlab/physics_all.m

Option 2 Python: inputs and results

```bash
cd 0_Shadow-Free_Chromaticity_python
python physics_all.py
```
<p align="left"> <img width="450" src="teaser/chromaticity.png"> </p>
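For intuition, a minimal sketch of the log-chromaticity projection that underlies these maps (a Finlayson-style illumination-invariant image). physics_all.m / physics_all.py are the reference implementations; the projection angle theta is estimated per image there, typically by entropy minimization, and is assumed given here:

```python
import numpy as np

def shadow_free_chromaticity(img, theta):
    """Project log-chromaticity onto an illumination-invariant direction.
    img: H x W x 3 RGB array; theta: projection angle in radians (assumed given)."""
    rgb = img.astype(np.float64) + 1e-6
    geo_mean = rgb.prod(axis=2) ** (1.0 / 3.0)
    chi = np.log(rgb / geo_mean[..., None])          # log-chromaticity; rows sum to ~0
    # Orthonormal basis of the plane {x : x1 + x2 + x3 = 0}
    U = np.array([[1 / np.sqrt(2), -1 / np.sqrt(2), 0],
                  [1 / np.sqrt(6), 1 / np.sqrt(6), -2 / np.sqrt(6)]])
    rho = chi @ U.T                                  # 2-D log-chromaticity coordinates
    e = np.array([np.cos(theta), np.sin(theta)])
    return rho @ e                                   # 1-D invariant (shadow-free) image
```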

Train with Shadow-Robust Feature Loss

```bash
CUDA_VISIBLE_DEVICES='0' python main_train.py --dataset SRD --datasetpath /home1/yeying/DC-ShadowNet-Hard-and-Soft-Shadow-Removal/dataset/SRD --iteration 1000000 --use_pecp_loss True
```

To reproduce Figure 5 of the main paper (VGG feature visualization results):

```bash
cd feature_release
python test_VGGfeatures.py
```
<p align="left"> <img width="350" src="teaser/feature_map.png"> </p>
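A minimal sketch of this kind of visualization: extract an early VGG-16 conv block and average the channel activations into a heat map. The exact layer used for Fig. 5 is defined in test_VGGfeatures.py, and the image path below is hypothetical:

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

vgg = models.vgg16(pretrained=True).features[:16].eval()   # conv layers up to relu3_3

img = Image.open("test_input/example.jpg").convert("RGB")  # hypothetical path
x = T.Compose([T.Resize((256, 256)), T.ToTensor()])(img).unsqueeze(0)
with torch.no_grad():
    feat = vgg(x)                   # 1 x C x H x W feature tensor
heat = feat.abs().mean(dim=1)[0]    # channel-averaged activation map for display
```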

Evaluation

Note: the evaluation code used by all methods (including ours) is conventionally called root mean squared error (RMSE), but it actually computes the mean absolute error (MAE).
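A sketch of the quantity involved, assuming the standard protocol of per-pixel error in CIELAB space, optionally restricted to the shadow or non-shadow region by a mask (the MATLAB scripts under evaluation/ are the reference):

```python
import numpy as np
from skimage import color

def mae_lab(result, gt, mask=None):
    """MAE in CIELAB space between result and ground truth (H x W x 3 RGB arrays)."""
    err = np.abs(color.rgb2lab(result) - color.rgb2lab(gt))
    if mask is not None:                 # restrict to shadow / non-shadow pixels
        err = err[mask.astype(bool)]
    return err.mean()
```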

1. SRD Dataset Evaluation

Set the paths of the shadow removal results and the dataset in evaluation/demo_srd_release.m, then run it:

demo_srd_release.m

This reproduces Table 1 of the main paper on SRD (size: 256x256):

| Method | Training | All | Shadow | Non-Shadow |
|---|---|---|---|---|
| DC-ShadowNet | Unpaired | 4.66 | 7.70 | 3.39 |
| Input Image | N/A | 13.77 | 37.40 | 3.96 |

For SRD (size: 640x840):

| Method | Training | All | Shadow | Non-Shadow |
|---|---|---|---|---|
| DC-ShadowNet | Unpaired | 6.57 | 9.84 | 5.52 |

2. AISTD Dataset Evaluation

Set the paths of the shadow removal results and the dataset in evaluation/demo_aistd_release.m, then run it:

demo_aistd_release.m

This reproduces Table 2 of the main paper on AISTD (size: 256x256):

| Method | Training | All | Shadow | Non-Shadow |
|---|---|---|---|---|
| DC-ShadowNet | Unpaired | 4.7 | 10.6 | 3.7 |

For AISTD (size: 480x640):

| Method | Training | All | Shadow | Non-Shadow |
|---|---|---|---|---|
| DC-ShadowNet | Unpaired | 6.33 | 11.37 | 5.38 |

3. LRSS Soft Shadow Dataset Evaluation

Set the paths of the shadow removal results and the dataset in evaluation/demo_lrss_release.m, then run it:

demo_lrss_release.m

This reproduces Table 3 of the main paper on the LRSS dataset (size: 256x256):

| Method | Training | All |
|---|---|---|
| DC-ShadowNet | Unpaired | 3.48 |
| Input Image | N/A | 12.26 |

Acknowledgments

Our code is based on U-GAT-IT; we would like to thank the authors.<br> One trick used in networks.py is to change out = self.UpBlock2(x) to out = (self.UpBlock2(x)+input).tanh() so that the network learns a residual.
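For reference, a self-contained sketch of that residual trick (names here are illustrative; see networks.py for the actual UpBlock2 and forward pass):

```python
import torch.nn as nn

class ResidualOutput(nn.Module):
    """Wrap the generator's final up-block so it predicts a residual
    that is added to the input image, then squashed by tanh."""
    def __init__(self, up_block):
        super().__init__()
        self.up_block = up_block

    def forward(self, x, input_img):
        # original: out = self.up_block(x)
        return (self.up_block(x) + input_img).tanh()
```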

License

The code and models in this repository are licensed under the MIT License for academic and other non-commercial uses.<br> For commercial use of the code and models, separate commercial licensing is available; please contact the authors.

Citation

If this work is useful for your research, please cite our papers:

```bibtex
@inproceedings{jin2021dc,
  title={DC-ShadowNet: Single-Image Hard and Soft Shadow Removal Using Unsupervised Domain-Classifier Guided Network},
  author={Jin, Yeying and Sharma, Aashish and Tan, Robby T},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={5027--5036},
  year={2021}
}

@inproceedings{jin2024des3,
  title={DeS3: Adaptive Attention-Driven Self and Soft Shadow Removal Using ViT Similarity},
  author={Jin, Yeying and Ye, Wei and Yang, Wenhan and Yuan, Yuan and Tan, Robby T},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={38},
  number={3},
  pages={2634--2642},
  year={2024}
}
```