Short notice for visitors (written 2020.10.27)

If you are interested in this repository, I recommend looking at NVIDIA's official PyTorch implementation instead:
https://nv-adlr.github.io/publication/partialconv-inpainting

chainer-partial_convolution_image_inpainting

Reproduction of the NVIDIA image inpainting paper "Image Inpainting for Irregular Holes Using Partial Convolutions" https://arxiv.org/abs/1804.07723
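For reference, the core idea of the paper's partial convolution can be sketched in a few lines of NumPy. This is a simplified single-channel version for illustration only; the repository's actual layers operate on batched multi-channel tensors:

```python
import numpy as np

def partial_conv2d(x, mask, w, b, stride=1):
    """Single-channel partial convolution (NumPy sketch).

    x, mask: 2-D arrays of equal shape; mask is 1 for valid pixels, 0 for holes.
    w: 2-D kernel, b: scalar bias.
    Returns the convolved output and the updated mask.
    """
    kh, kw = w.shape
    oh = (x.shape[0] - kh) // stride + 1
    ow = (x.shape[1] - kw) // stride + 1
    out = np.zeros((oh, ow))
    new_mask = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            ys, xs = i * stride, j * stride
            m = mask[ys:ys + kh, xs:xs + kw]
            s = m.sum()
            if s > 0:
                patch = x[ys:ys + kh, xs:xs + kw] * m
                # re-normalize by (window size / number of valid pixels)
                out[i, j] = (w * patch).sum() * (kh * kw / s) + b
                new_mask[i, j] = 1.0
            # windows with no valid pixels stay 0, i.e. C(0) = 0
    return out, new_mask
```

A window that contains at least one valid pixel produces an output and becomes valid in the updated mask, which is how holes shrink layer by layer.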

1,000 iteration results (completion, output, mask). "completion" represents the input images whose masked pixels are replaced with the corresponding pixels of the output images.
<img src="imgs/iter_1000.jpg" alt="iter_1000.jpg" title="iter_1000.jpg" width="768" height="512">

10,000 iteration results (completion, output, mask)
<img src="imgs/iter_10000.jpg" alt="iter_10000.jpg" title="iter_10000.jpg" width="768" height="512">

100,000 iteration results (completion, output, mask)
<img src="imgs/iter_100000.jpg" alt="iter_100000.jpg" title="iter_100000.jpg" width="768" height="512">

Environment

How to try

Download dataset (Places2)

Places2

Set dataset path

Edit common/paths.py

train_place2 = "/yourpath/place2/data_256"
val_place2 = "/yourpath/place2/val_256"
test_place2 = "/yourpath/test_256"
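Since common/paths.py is a plain Python module, any script can import these variables and walk the directories. A minimal sketch of consuming them (the list_images helper is hypothetical, not part of the repository):

```python
import os

# as set in common/paths.py
train_place2 = "/yourpath/place2/data_256"

def list_images(root, exts=(".jpg", ".png")):
    """Recursively collect image file paths under root."""
    files = []
    for dirpath, _, names in os.walk(root):
        for n in names:
            if n.lower().endswith(exts):
                files.append(os.path.join(dirpath, n))
    return files
```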

Preprocessing

In this implementation, masks are automatically generated in advance.

python generate_windows.py image_size generate_num

"image_size" specifies the size of the generated masks.
"generate_num" specifies the number of masks to generate.

The default setting uses image_size=256 and generate_num=1000.

# To try the default setting
python generate_windows.py 256 1000

Note that the original paper uses 512x512 images and generates masks in a different way.
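As the FAQ below mentions, the masks here are carved out by a random walk. A minimal NumPy sketch of that idea (the actual generate_windows.py may use different step counts and movement rules):

```python
import numpy as np

def random_walk_mask(size=256, steps=20000, seed=None):
    """Sketch of a random-walk hole mask (1 = valid pixel, 0 = hole).

    A cursor wanders over the image, zeroing every pixel it visits.
    Parameters are illustrative, not the repository's defaults.
    """
    rng = np.random.default_rng(seed)
    mask = np.ones((size, size), dtype=np.uint8)
    y, x = rng.integers(0, size, 2)
    for _ in range(steps):
        mask[y, x] = 0
        dy, dx = rng.integers(-1, 2, 2)  # step one pixel in a random direction
        y = int(np.clip(y + dy, 0, size - 1))
        x = int(np.clip(x + dx, 0, size - 1))
    return mask
```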

Run training

python train.py -g 0 

The -g option specifies which GPU to use (here, GPU No. 0).
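A sketch of how such a GPU flag is typically parsed with argparse (the actual train.py may define its options differently; the default of -1 is an assumption, not the repository's behavior):

```python
import argparse

def build_parser():
    """Hypothetical option parser mirroring `python train.py -g 0`."""
    parser = argparse.ArgumentParser(description="train.py options (sketch)")
    parser.add_argument("-g", "--gpu", type=int, default=-1,
                        help="GPU ID to use")
    return parser

# e.g. `python train.py -g 0` would yield args.gpu == 0
args = build_parser().parse_args(["-g", "0"])
```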

Differences from the original paper

First, check the implementation FAQ.

  1. C(0)=0 in the first implementation (already fixed in the latest version)
  2. Masks are generated using a random walk by generate_windows.py
  3. To use the Chainer VGG pre-trained model, I re-scaled the input of the model; see updater.vgg_extract. This includes cropping, so the style loss outside the crop box is ignored.
  4. Padding is chosen so that the height and width satisfy input:output = 2:1 at each encoder stage.
  5. I use chainer.functions.unpooling_2d for upsampling (you can replace it with chainer.functions.upsampling_2d)
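Regarding item 5 above: unpooling_2d with kernel size equal to stride 2 behaves like nearest-neighbour 2x upsampling. A NumPy sketch of the equivalent operation on a single feature map:

```python
import numpy as np

def nn_upsample2x(x):
    """Nearest-neighbour 2x upsampling of a 2-D feature map:
    each value is expanded into a 2x2 block."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)
```

upsampling_2d, by contrast, uses pooling indices from a paired max-pooling layer, which is why the two are interchangeable only up to where the copied values land.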

Other differences:

Acknowledgement

This repository builds on the code of the following impressive repositories: