# BigGAN-tensorflow

Reimplementation of the paper: Large Scale GAN Training for High Fidelity Natural Image Synthesis

## Introduction
A simple implementation of the great paper (BigGAN) Large Scale GAN Training for High Fidelity Natural Image Synthesis, which can generate very realistic images. However, due to my limited hardware :sob:, I only train on 32x32 images from CIFAR-10 and 64x64 images from ImageNet64. Note that the training procedure is quite slow.
From the paper:
## Dataset
- 32x32 images: CIFAR-10: http://www.cs.toronto.edu/~kriz/cifar-10-matlab.tar.gz
- 64x64 images: ImageNet64: https://drive.google.com/open?id=1uN9O69eeqJEPV797d05ZuUmJ23kGVtfU

Download the datasets and put them into the folder `dataset`.
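CIFAR-10 batches store each image as a flat 3072-value row (1024 red, then 1024 green, then 1024 blue values). A minimal NumPy sketch of reshaping such a batch into NHWC images scaled to [-1, 1] (a common GAN input range); the function name and the toy random batch are made up for illustration:

```python
import numpy as np

def cifar_batch_to_images(flat_batch):
    """Convert a CIFAR-10 batch of flat 3072-d uint8 rows into
    NHWC float32 images scaled to [-1, 1]."""
    n = flat_batch.shape[0]
    # Each row is channel-major: 1024 R, 1024 G, 1024 B values.
    images = flat_batch.reshape(n, 3, 32, 32).transpose(0, 2, 3, 1)
    return images.astype(np.float32) / 127.5 - 1.0

# Toy example: random bytes standing in for real CIFAR-10 pixels.
fake_batch = np.random.randint(0, 256, size=(4, 3072), dtype=np.uint8)
imgs = cifar_batch_to_images(fake_batch)
print(imgs.shape)  # (4, 32, 32, 3)
```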
## Architecture
## Results

### 32x32 CIFAR-10

Configuration:

- Training iterations: 100,000
- Truncation threshold: 1.0
| | Discriminator | Generator |
|---|---|---|
| Update steps | 2 | 1 |
| Learning rate | 4e-4 | 1e-4 |
| Orthogonal reg | :heavy_check_mark: | :heavy_check_mark: |
| Orthogonal init | :heavy_check_mark: | :heavy_check_mark: |
| Hierarchical latent | :x: | :heavy_check_mark: |
| Projection batchnorm | :heavy_check_mark: | :x: |
| Truncation threshold | :x: | :heavy_check_mark: |
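Orthogonal regularization, listed in the table above, can be sketched framework-free with NumPy. BigGAN uses a relaxed form that penalizes only the off-diagonal entries of WᵀW, leaving filter norms unconstrained; the function name and `beta` default here are illustrative, not this repo's exact API:

```python
import numpy as np

def ortho_reg(w, beta=1e-4):
    """BigGAN-style orthogonal regularization:
    beta * ||W^T W * (1 - I)||_F^2.
    Unlike the full penalty ||W^T W - I||_F^2, only off-diagonal
    entries of the Gram matrix are penalized."""
    w = w.reshape(-1, w.shape[-1])      # flatten conv kernels to 2-D
    gram = w.T @ w
    mask = 1.0 - np.eye(gram.shape[0])  # zero out the diagonal
    return beta * np.sum((gram * mask) ** 2)

# A matrix with orthonormal columns incurs (near-)zero penalty.
q, _ = np.linalg.qr(np.random.randn(8, 4))
print(ortho_reg(q))  # ~0.0
```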
Generation:

Truncation threshold = 1.0: slight mode collapse (the truncation threshold is too small).

Truncation threshold = 2.0:
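The truncation trick behind these thresholds resamples any latent component whose magnitude exceeds the threshold; smaller thresholds trade variety for fidelity, which is why threshold = 1.0 shows some mode collapse. A minimal NumPy sketch (function name is illustrative):

```python
import numpy as np

def truncated_z(batch, dim, threshold, seed=None):
    """Sample a latent batch from N(0, 1), resampling every component
    whose absolute value exceeds `threshold` until all pass."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((batch, dim))
    while True:
        bad = np.abs(z) > threshold
        if not bad.any():
            return z
        z[bad] = rng.standard_normal(bad.sum())  # redraw outliers only

z = truncated_z(16, 128, threshold=2.0, seed=0)
print(np.abs(z).max() <= 2.0)  # True
```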
| car2plane | ship2horse | cat2bird |
|---|---|---|
### 64x64 ImageNet

Configuration:

- Training iterations: 100,000
| | Discriminator | Generator |
|---|---|---|
| Update steps | 2 | 1 |
| Learning rate | 4e-4 | 1e-4 |
| Orthogonal reg | :heavy_check_mark: | :heavy_check_mark: |
| Orthogonal init | :heavy_check_mark: | :heavy_check_mark: |
| Hierarchical latent | :x: | :heavy_check_mark: |
| Projection batchnorm | :heavy_check_mark: | :x: |
| Truncation threshold | :x: | :heavy_check_mark: |
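Both tables use two discriminator updates per generator update. The alternating schedule can be sketched as below, with `update_d`/`update_g` as stand-ins for the real optimizer steps (this is an illustrative skeleton, not this repo's training loop):

```python
def train(num_iters, d_steps=2, g_steps=1):
    """Alternating GAN schedule: `d_steps` discriminator updates,
    then `g_steps` generator updates, per training iteration."""
    counts = {"d": 0, "g": 0}  # stand-in for actual optimizer steps

    for _ in range(num_iters):
        for _ in range(d_steps):
            counts["d"] += 1   # update_d(): real/fake loss + D step
        for _ in range(g_steps):
            counts["g"] += 1   # update_g(): generator loss + G step
    return counts

print(train(100))  # {'d': 200, 'g': 100}
```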