# RIS-GAN

This repository contains the TensorFlow code for the paper "RIS-GAN: Explore Residual and Illumination with Generative Adversarial Networks for Shadow Removal" ([pdf](http://www.chengjianglong.com/publications/RISGAN_AAAI.pdf)).

## RIS-GAN Architecture

Attached below is the architecture diagram of RIS-GAN as given in the paper.

Notes:

  1. The GAN component is derived from the paper "Single Image Haze Removal using a Generative Adversarial Network".
  2. This RIS-GAN architecture can be used for other applications and is not limited to shadow removal.
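As background for the GAN component noted above, here is a minimal NumPy sketch of the standard non-saturating adversarial losses. This is the generic formulation only; RIS-GAN's actual objective additionally combines residual and illumination terms, which are not reproduced here.

```python
import numpy as np

def adversarial_losses(d_real, d_fake, eps=1e-8):
    """Standard non-saturating GAN losses from discriminator outputs.

    d_real / d_fake: discriminator probabilities in (0, 1) for real and
    generated images. Illustrative sketch only, not RIS-GAN's exact
    multi-term objective.
    """
    # Discriminator wants d_real -> 1 and d_fake -> 0.
    d_loss = -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))
    # Non-saturating generator loss: push d_fake -> 1.
    g_loss = -np.mean(np.log(d_fake + eps))
    return d_loss, g_loss

d_loss, g_loss = adversarial_losses(np.array([0.9, 0.8]), np.array([0.2, 0.1]))
```

A well-trained discriminator (high `d_real`, low `d_fake`) yields a small `d_loss` and a large `g_loss`, which is what drives the generator updates.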

Requirements:

Instructions:

  1. We use VGG-19 pretrained on the ImageNet dataset to compute the perceptual loss, with the weights provided by machrisaa's implementation. Download the weights from this link and place the file in this repository.

  2. Download the dataset.

  3. In case you want to use your own dataset, follow these instructions. If not, skip this step.

  4. Train the model using the following command.
python main.py --A_dir A --B_dir B --mode train

The file main.py supports many parameters, each with a default value; override them to suit your needs.
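As a rough illustration of such a command-line interface, here is an argparse sketch. Only `--A_dir`, `--B_dir`, and `--mode` appear in this README; the defaults and help strings below are hypothetical placeholders, not the repository's actual values.

```python
import argparse

def build_parser():
    # Hypothetical reconstruction of main.py's CLI for illustration only.
    parser = argparse.ArgumentParser(description="RIS-GAN training/inference")
    parser.add_argument("--A_dir", default="A",
                        help="directory of input (shadow) images")
    parser.add_argument("--B_dir", default="B",
                        help="directory of target or result images")
    parser.add_argument("--mode", choices=["train", "inference"],
                        default="train", help="run mode")
    return parser

# Parse a sample inference invocation; unspecified flags fall back to defaults.
args = build_parser().parse_args(["--A_dir", "shadow", "--mode", "inference"])
```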

  5. Test by using the following command.
python main.py --A_dir shadow --B_dir result --mode inference
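Step 1 above uses a VGG-19 perceptual loss. A minimal sketch of that idea, assuming the VGG-19 feature maps have already been extracted for both images; the choice of layers and the unweighted sum here are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def perceptual_loss(feats_pred, feats_gt):
    """Perceptual loss: mean squared distance between VGG-19 feature maps.

    feats_pred / feats_gt: lists of same-shaped feature arrays, assumed to
    come from chosen VGG-19 layers (e.g. via machrisaa's TensorFlow port).
    Layer selection and weighting are illustrative, not the paper's.
    """
    return sum(np.mean((p - g) ** 2) for p, g in zip(feats_pred, feats_gt))

# Toy feature maps: constant difference of 1 everywhere.
a = [np.ones((4, 4, 8))]
b = [np.zeros((4, 4, 8))]
loss = perceptual_loss(a, b)
```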

## Sample results

Attached below are some shadow removal results from the test set.

Citation

If you use the code in your own research, please cite:

@InProceedings{Zhang:AAAI2020,
  title = {RIS-GAN: Explore Residual and Illumination with Generative Adversarial Networks for Shadow Removal},
  author = {Zhang, Ling and Long, Chengjiang and Zhang, Xiaolong and Xiao, Chunxia},
  booktitle = {AAAI Conference on Artificial Intelligence (AAAI)},
  year = {2020}
}

Depending on the setup you use, consider also citing the paper "Single Image Haze Removal using a Generative Adversarial Network".