Semantic Image Inpainting TensorFlow

This repository is a TensorFlow implementation of Semantic Image Inpainting with Deep Generative Models, CVPR 2017.

<p align='center'> <img src="https://user-images.githubusercontent.com/37034031/43243280-d4e8a3c0-90e0-11e8-8495-b768427019bb.png"> </p>

Requirements

Semantic Image Inpainting

  1. celebA
<p align='center'> <img src="https://user-images.githubusercontent.com/37034031/43244581-48360a66-90e6-11e8-823c-a71d957ed73b.png"> </p>
  2. SVHN
<p align='center'> <img src="https://user-images.githubusercontent.com/37034031/43244654-98d56cdc-90e6-11e8-8f0f-4695d3d3ebe4.png"> </p>
  3. Failure Examples
<p align='center'> <img src="https://user-images.githubusercontent.com/37034031/43245170-4eefe500-90e8-11e8-8f49-a47680de2efe.png"> </p>

Documentation

Download Dataset

  1. celebA Dataset: use the following command to download the CelebA dataset, then copy it into the corresponding folder as described in the Directory Hierarchy section. Manually set aside approximately 2,000 images for testing: put them in the `val` folder and the rest in the `train` folder.
python download.py celebA
  2. SVHN Dataset
    Download SVHN data from The Street View House Numbers (SVHN) Dataset website. The two .mat files you need are train_32x32.mat and test_32x32.mat in Cropped Digits Format 2.
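
The two preparation steps above can be sketched in Python. `split_val` and `svhn_to_nhwc` are hypothetical helper names for illustration only; the repo's own `dataset.py` may handle this differently:

```python
import os
import random
import shutil

import numpy as np


def split_val(train_dir, val_dir, n_val=2000, seed=0):
    """Move a random sample of n_val images from train_dir into val_dir."""
    os.makedirs(val_dir, exist_ok=True)
    files = sorted(os.listdir(train_dir))
    random.Random(seed).shuffle(files)
    for name in files[:n_val]:
        shutil.move(os.path.join(train_dir, name), os.path.join(val_dir, name))


def svhn_to_nhwc(images):
    """SVHN's .mat files store images as (32, 32, 3, N); most TensorFlow
    pipelines expect (N, 32, 32, 3). The array itself can be read with
    scipy.io.loadmat('train_32x32.mat')['X']."""
    return np.transpose(images, (3, 0, 1, 2))
```

For celebA, run `split_val('Data/celebA/train', 'Data/celebA/val')` once after downloading; for SVHN, apply `svhn_to_nhwc` to the loaded `X` array before feeding it to the network.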

Directory Hierarchy

.
├── semantic_image_inpainting
│   └── src
│       ├── dataset.py
│       ├── dcgan.py
│       ├── download.py
│       ├── inpaint_main.py
│       ├── inpaint_model.py
│       ├── inpaint_solver.py
│       ├── main.py
│       ├── solver.py
│       ├── mask_generator.py
│       ├── poissonblending.py
│       ├── tensorflow_utils.py
│       └── utils.py
└── Data
    ├── celebA
    │   ├── train
    │   └── val
    └── svhn
        ├── test_32x32.mat
        └── train_32x32.mat

src: source code of the semantic-image-inpainting implementation

Implementation Details

Two separate stages are needed to use the semantic image inpainting model.

The generator and discriminator networks are the same DCGAN architectures described in Alec Radford's paper, except that batch normalization is kept in training mode during both training and testing, which we found gives more stable results. The semantic image inpainting model follows moodoki's semantic_image_inpainting implementation; some bugs and deviations from the original paper have been fixed.
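The batch-normalization choice can be illustrated with a small NumPy sketch (this is not the repo's actual TensorFlow code, just the idea behind the `training` flag):

```python
import numpy as np


def batch_norm(x, moving_mean, moving_var, training, eps=1e-5):
    """In training mode, normalize with the current batch's statistics;
    in inference mode, use the stored moving averages instead."""
    if training:
        mean, var = x.mean(axis=0), x.var(axis=0)
    else:
        mean, var = moving_mean, moving_var
    return (x - mean) / np.sqrt(var + eps)


x = np.random.RandomState(0).randn(64, 4) * 3.0 + 5.0
# This repo keeps training=True even at test time, so the output is
# always normalized with the batch's own mean and variance:
y = batch_norm(x, np.zeros(4), np.ones(4), training=True)
```

With `training=False`, the output would instead depend on the moving averages accumulated during training, which in this repo led to less stable inpainting results.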

Stage 1: Training DCGAN

Use main.py to train a DCGAN network. Example usage:

python main.py --is_train=true

Evaluate DCGAN

Use main.py to evaluate a DCGAN network. Example usage:

python main.py --is_train=false --load_model=folder/you/wish/to/test/e.g./20180704-1746

The remaining arguments are the same as in training above.

Stage 2: Utilize Semantic-image-inpainting Model

Use inpaint_main.py to utilize semantic-image-inpainting model. Example usage:

python inpaint_main.py --dataset=celebA \
    --load_model=DCGAN/model/you/want/to/use/e.g./20180704-1746 \
    --mask_type=center
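
A `center` mask blanks out a square region in the middle of the image. The sketch below is a hypothetical illustration of that mask type; the repo's actual `mask_generator.py` may size and place the hole differently:

```python
import numpy as np


def center_mask(size=64, frac=0.25):
    """Binary mask: 0 inside a centered square hole covering roughly
    `frac` of the image area, 1 over the known pixels."""
    mask = np.ones((size, size), dtype=np.float32)
    hole = int(size * np.sqrt(frac))   # side length of the square hole
    start = (size - hole) // 2
    mask[start:start + hole, start:start + hole] = 0.0
    return mask
```

Multiplying an image by this mask zeroes the region the inpainting model must reconstruct, while the surrounding pixels stay available as context.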

Loss for Optimizing Latent Vector

<p align='center'> <img src="https://user-images.githubusercontent.com/37034031/43247920-201307b8-90f1-11e8-8b22-8ecb3ebc734d.png" width=800> </p> <p align='center'> <img src="https://user-images.githubusercontent.com/37034031/43247983-560194d4-90f1-11e8-9d8f-a7435cb21885.png" width=800> </p> <p align='center'> <img src="https://user-images.githubusercontent.com/37034031/43247998-677ea8c8-90f1-11e8-9564-12ffa3117b9b.png" width=800> </p>
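
The overall shape of this objective, a contextual term on the known pixels plus a weighted perceptual term from the discriminator, can be sketched as follows. This simplified version omits the importance weighting of the mask used in the paper, and `inpaint_loss` is an illustrative name, not a function from this repo:

```python
import numpy as np


def inpaint_loss(g_z, y, mask, d_g_z, lam=0.1):
    """Loss for optimizing the latent vector z.
    g_z: generated image G(z); y: corrupted input image;
    mask: 1 on known pixels, 0 on the hole;
    d_g_z: discriminator's probability that G(z) is real."""
    # Contextual loss: L1 distance on the known (unmasked) pixels only.
    contextual = np.sum(np.abs(mask * (g_z - y)))
    # Perceptual loss: pushes G(z) toward images the discriminator accepts.
    perceptual = np.log(1.0 - d_g_z + 1e-8)
    return contextual + lam * perceptual
```

Gradient descent on this loss with respect to z (not the network weights) finds a latent code whose generated image both matches the known pixels and looks realistic to the discriminator.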

Citation

  @misc{chengbinjin2018semantic-image-inpainting,
    author = {Cheng-Bin Jin},
    title = {semantic-image-inpainting},
    year = {2018},
    howpublished = {\url{https://github.com/ChengBinJin/semantic-image-inpainting}},
    note = {commit xxxxxxx}
  }

Attributions/Thanks

License

Copyright (c) 2018 Cheng-Bin Jin. Contact me for commercial use (or rather any use that is not academic research) (email: sbkim0407@gmail.com). Free for research use, as long as proper attribution is given and this copyright notice is retained.

Related Projects