
Towards Ghost-free Shadow Removal

This repo contains the code and results of the AAAI 2020 paper:

<i><b> Towards Ghost-free Shadow Removal via <br> Dual Hierarchical Aggregation Network and Shadow Matting GAN </b></i><br> Xiaodong Cun, Chi-Man Pun<sup>*</sup>, Cheng Shi <br> University of Macau

Syn. Datasets | Models | Results | Paper | Supp. | Poster | 🔥 Online Demo! (Google Colab)

<img width='100%' src='https://user-images.githubusercontent.com/4397546/69003615-582b2180-0940-11ea-9faa-2f2ae6b1d5ba.png'/>

<i>We show a result of our model, with the input in the yellow square. As the two zoomed regions show, our method removes the shadow and suppresses ghosting.</i>

Known Issues

#4: inconsistency between the code and Figure 2 (thanks @naoto0804).

Some links are broken because the files were hosted on a single OneDrive account; I will try to restore them soon.

Introduction

<p style="text-align:justify"><i>Shadow removal is an essential task for scene understanding. Many studies consider only matching the image contents, which often causes two types of ghosts: color inconsistencies in shadow regions or artifacts on shadow boundaries. In this paper, we tackle these issues from two directions. On the one hand, to carefully learn a border-artifact-free image, we propose a novel network structure named the Dual Hierarchical Aggregation Network (DHAN). It contains a series of dilated convolutions with growing dilation rates as the backbone without any down-sampling, and we hierarchically aggregate multi-context features for attention and prediction, respectively. On the other hand, we argue that training on a limited dataset restricts the network's textural understanding, which leads to color inconsistencies in shadow regions. Currently, the largest dataset contains 2k+ shadow/shadow-free image pairs. However, it covers only 0.1k+ unique scenes, since many samples share exactly the same background with different shadow positions. Thus, we design a Shadow Matting Generative Adversarial Network (SMGAN) to synthesize realistic shadow mattes from a given shadow mask and shadow-free image. With the help of novel masks or scenes, we enhance the current datasets with synthesized shadow images. Experiments show that our DHAN can erase the shadows and produce high-quality ghost-free images. After training on the synthesized and real datasets, our network outperforms other state-of-the-art methods by a large margin.</i></p>
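As a rough illustration of the matting idea (an illustrative composition only, with a hypothetical `compose_shadow` helper; the actual SMGAN learns the matte from a shadow mask and a shadow-free image), a shadow image can be composed by darkening a shadow-free image with a per-pixel matte:

```python
import numpy as np

def compose_shadow(shadow_free, matte):
    """Compose a synthetic shadow image from a shadow-free image and a
    per-pixel shadow matte in [0, 1] (1 = fully lit, lower = darker).

    Illustrative only; SMGAN learns the matte rather than taking it as input.
    """
    shadow_free = np.asarray(shadow_free, dtype=np.float32)
    matte = np.asarray(matte, dtype=np.float32)
    if matte.ndim == 2:          # broadcast an H x W matte over the channels
        matte = matte[..., None]
    return np.clip(shadow_free * matte, 0.0, 1.0)

# A 2 x 2 white image darkened to 40% brightness in the left column.
img = np.ones((2, 2, 3), dtype=np.float32)
matte = np.array([[0.4, 1.0], [0.4, 1.0]], dtype=np.float32)
out = compose_shadow(img, matte)
```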

Sample Comparison

<i>Comparison on the shadow removal datasets. The first two samples are from the ISTD dataset, while the bottom two are from the SRD dataset. In (d), the top two results are from ST-CGAN and the bottom two are from DeShadowNet.</i>

Resources

Other Resources

Setup

Create the conda environment following the instructions here.

Demo

1. Local ipynb demo

  1. Download the pre-trained model from the links above (SRD+ is recommended).
  2. Download the pre-trained VGG-19 model from MatConvNet.
  3. Uncompress the pre-trained models into `Models/` as shown in the folder structure.
  4. Start a jupyter server and run the demo following the instructions in `demo.ipynb`.

The demo has been tested on both macOS 10.15 and Ubuntu 18.04 LTS. Both CPU and GPU are supported (but running on CPU is quite slow).

2. Online google colab demo

An online shadow removal demo is hosted on Google Colab at this url.

An online shadow synthesis demo is hosted on Google Colab at this url.

3. Demo from command line (Thanks @aliericcantona)

python demo.py --model PATH_TO_PRETRAINED_MODEL --vgg_19_path PATH_TO_VGG19 --input_dir SAMPLES_DIR --result_dir RESULTS_DIR

Training

The data folders should be organized as follows:

ISTD_DATA_ROOT
    * train
        - train_A # shadow image
        - train_B # shadow mask
        - train_C # shadowfree image
        - shadow_free # USR shadowfree images
        - synC # our Syn. shadow
    * test
        - test_A # shadow image
        - test_B # shadow mask
        - test_C # shadowfree image

SRD_DATA_ROOT
    * train
        - train_A # renaming the original `shadow` folder in `SRD`.
        - train_B # the extracted shadow mask by ourself.
        - train_C # renaming the original `shadow_free` folder in `SRD`.
        - shadow_free # USR shadowfree images
        - synC # our Syn. shadow
    * test
        - train_A # renaming the original `shadow` folder in `SRD`.
        - train_B # the extracted shadow mask by ourself.
        - train_C # renaming the original `shadow_free` folder in `SRD`.

1. Generating Synthesized Shadow

Download the ISTD dataset from the source and the USR dataset, then unzip them into `$YOUR_DATA_ROOT/ISTD_dataset/train/`. Train the GAN by:

python train_ss.py \
--task YOUR_TASK_NAME \
--data_dir $YOUR_DATA_ROOT/$ISTD_DATASET_ROOT/train/ \
--use_gpu 0 \
--is_training 1
# --use_gpu: GPU id (a value < 0 runs on CPU)
# --is_training: 1 for training, 0 for testing

2. Training on the ISTD dataset

Download the ISTD dataset from the source and our synthesized dataset, then unzip them into `$YOUR_DATA_ROOT/ISTD_dataset/train/`. Train the network by:

python train_sr.py \
--task YOUR_TASK_NAME \
--data_dir $YOUR_DATA_ROOT/$ISTD_DATASET_ROOT/train/ \
--use_gpu 1 \
--is_training 1 \
--use_da 0.5
# --use_gpu: GPU id (a value < 0 runs on CPU)
# --is_training: 1 for training, 0 for testing
# --use_da: the fraction of synthesized data used in training

3. Training on SRD dataset

Download and unzip the SRD dataset from the source, then reorganize it as described above. Train the network by:

python train_sr.py \
--task YOUR_TASK_NAME \
--data_dir $YOUR_DATA_ROOT/$SRD_DATASET_ROOT/train/ \
--use_gpu 1 \
--is_training 1 \
--use_da 0.5
# --use_gpu: GPU id (a value < 0 runs on CPU)
# --is_training: 1 for training, 0 for testing
# --use_da: the fraction of synthesized data used in training
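The `--use_da` flag above sets the fraction of synthesized data mixed into training. A minimal sketch of one plausible way to build such a mixed epoch (a hypothetical `mix_datasets` helper; the repo's actual loader may differ):

```python
import random

def mix_datasets(real_samples, syn_samples, use_da, seed=0):
    """Build an epoch's sample list where a fraction `use_da` of the entries
    is drawn from the synthesized set and the rest from the real set.

    Hypothetical sketch of the --use_da behavior, not the repo's loader.
    """
    rng = random.Random(seed)
    n = len(real_samples)
    n_syn = min(int(n * use_da), len(syn_samples))
    mixed = rng.sample(real_samples, n - n_syn) + rng.sample(syn_samples, n_syn)
    rng.shuffle(mixed)
    return mixed
```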

Test

# ISTD DATASET
python train_sr.py \
--task YOUR_TASK_NAME \
--data_dir $YOUR_DATA_ROOT/$ISTD_DATASET_ROOT/test/ \
--use_gpu 1 \
--is_training 0

# SRD DATASET
python train_sr.py \
--task YOUR_TASK_NAME \
--data_dir $YOUR_DATA_ROOT/$SRD_DATASET_ROOT/test/ \
--use_gpu 1 \
--is_training 0

# --task: the pre-trained model folder under logs/ (logs/YOUR_TASK_NAME)
# --use_gpu: GPU id (a value < 0 runs on CPU)
# --is_training: 0 for testing
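Shadow removal results on ISTD and SRD are commonly reported as RMSE between the output and the ground truth over the shadow, non-shadow, and whole-image regions (the standard protocol measures it in the LAB color space, which is omitted here). A minimal scoring sketch with a hypothetical `masked_rmse` helper, not the official evaluation script:

```python
import numpy as np

def masked_rmse(pred, gt, mask=None):
    """RMSE between prediction and ground truth, optionally restricted to a
    boolean mask (e.g. the shadow region). Inputs are float arrays on the
    same scale. Hypothetical helper, not the paper's evaluation code.
    """
    pred = np.asarray(pred, dtype=np.float64)
    gt = np.asarray(gt, dtype=np.float64)
    diff = (pred - gt) ** 2
    if mask is not None:
        diff = diff[np.asarray(mask, dtype=bool)]
    return float(np.sqrt(diff.mean()))
```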

Acknowledgements

The author would like to thank Nan Chen for her helpful discussions.

Part of the code is based on FastImageProcessing and Perceptual Reflection Removal.

Citation

If you find our work useful in your research, please consider citing:

@misc{cun2019ghostfree,
    title={Towards Ghost-free Shadow Removal via Dual Hierarchical Aggregation Network and Shadow Matting GAN},
    author={Xiaodong Cun and Chi-Man Pun and Cheng Shi},
    year={2019},
    eprint={1911.08718},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}

Contact

Please contact me (Xiaodong Cun, yb87432@um.edu.mo) if you have any questions.

Related Works

Zhang, Xuaner, Ren Ng, and Qifeng Chen. "Single Image Reflection Separation with Perceptual Losses." Proceedings of CVPR (2018).

Hu, Xiaowei, et al. "Mask-ShadowGAN: Learning to Remove Shadows from Unpaired Data." Proceedings of ICCV (2019).
