AC-GAN (Auxiliary Classifier GAN)
A tensorflow implementation of the paper "Conditional Image Synthesis With Auxiliary Classifier GANs" by Augustus Odena et al. at Google Brain.
I had already implemented this kind of GAN structure last September (see: Supervised InfoGAN tensorflow implementation). There, I added a supervised loss (the auxiliary classifier in this paper's terms) to the InfoGAN structure to keep the generated categories consistent and to stabilize training. The results were promising: compared to InfoGAN, the model generated consistent categories and converged faster.
The paper proposing the same architecture, by researchers at Google Brain, was published last October. They provide results on diverse datasets, generate larger images, and give a plausible explanation of why this architecture works. I think the paper is awesome, but the authors did not release their source code, so I present my implementation here.
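As a rough, framework-agnostic sketch of the objective described above (numpy only, since the repo itself uses sugartensor; the function and argument names here are my own, not from the repo or the paper's code), AC-GAN adds a class log-likelihood term L_C to the usual real/fake source term L_S. The discriminator maximizes L_S + L_C while the generator maximizes L_C - L_S, so both networks cooperate on classification and compete only on the source term:

```python
import numpy as np

def acgan_losses(d_real, d_fake, aux_real, aux_fake, labels, eps=1e-8):
    """Toy AC-GAN objectives on one batch.

    d_real / d_fake: discriminator's P(source=real) for real / generated images, shape (N,).
    aux_real / aux_fake: class probabilities from the auxiliary head, shape (N, K).
    labels: integer class labels of the real images and of the generator's inputs, shape (N,).
    """
    n = len(labels)
    # L_S: log-likelihood of the correct source (real vs. fake).
    l_s = np.mean(np.log(d_real + eps)) + np.mean(np.log(1.0 - d_fake + eps))
    # L_C: log-likelihood of the correct class, on both real and fake samples.
    l_c = (np.mean(np.log(aux_real[np.arange(n), labels] + eps)) +
           np.mean(np.log(aux_fake[np.arange(n), labels] + eps)))
    d_objective = l_s + l_c   # discriminator maximizes L_S + L_C
    g_objective = l_c - l_s   # generator maximizes L_C - L_S
    return d_objective, g_objective
```

Because L_C appears with the same sign in both objectives, the auxiliary classifier acts as the shared, supervised signal that keeps generated categories consistent.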
AC-GAN architecture (from the paper)
<p align="center"> <img src="https://raw.githubusercontent.com/buriburisuri/ac-gan/master/png/architecture.png" width="800"/> </p>

Dependencies
- tensorflow >= rc0.11
- sugartensor >= 0.0.1.7
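To illustrate the conditioning shown in the architecture diagram above (a minimal numpy sketch, not the repo's sugartensor code; the helper name and the one-hot-concatenation choice are my own assumptions), the generator receives the class label alongside the noise vector:

```python
import numpy as np

def generator_input(batch_size, z_dim, num_classes, labels=None, seed=0):
    """Build generator inputs: noise z concatenated with a one-hot class code.

    If labels is None, classes are drawn uniformly at random,
    as is typical when sampling fake images during training.
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((batch_size, z_dim))
    if labels is None:
        labels = rng.integers(0, num_classes, size=batch_size)
    onehot = np.zeros((batch_size, num_classes))
    onehot[np.arange(batch_size), labels] = 1.0   # one-hot encode the class
    return np.concatenate([z, onehot], axis=1), labels
```

The same labels are then reused as targets for the discriminator's auxiliary classification head.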
Training the network
Execute
<pre><code> python train.py </code></pre>to train the network. You can find the resulting ckpt files and log files in the 'asset/train' directory. Launch tensorboard --logdir asset/train/log to monitor the training process.
Generating image
Execute
<pre><code> python generate.py </code></pre>to generate a sample image. The 'sample.png' file will be created in the 'asset/train' directory.
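Sample grids like the ones below are typically built by fixing one class per row and drawing fresh noise for each cell. A small illustrative sketch (my own helper, not the repo's generate.py; the grid shape and z dimension are assumptions):

```python
import numpy as np

def sample_grid_inputs(num_classes=10, cols=10, z_dim=100, seed=0):
    """Generator inputs for a sample grid: row i is class i, one noise draw per cell."""
    rng = np.random.default_rng(seed)
    labels = np.repeat(np.arange(num_classes), cols)      # row i -> class i
    z = rng.standard_normal((num_classes * cols, z_dim))  # fresh noise per cell
    onehot = np.eye(num_classes)[labels]                  # one-hot class codes
    return np.concatenate([z, onehot], axis=1), labels
```

Feeding these inputs through the trained generator and tiling the outputs row by row produces a grid with one category per row.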
Generated image sample
The following image was generated by the AC-GAN network.
<p align="center"> <img src="https://raw.githubusercontent.com/buriburisuri/ac-gan/master/png/fake.png" width="800"/> </p>And the next image was generated with the categorical auxiliary classifier.
<p align="center"> <img src="https://raw.githubusercontent.com/buriburisuri/ac-gan/master/png/sample.png" width="800"/> </p>And the next image was generated by varying the continuous auxiliary code. You can see the rotation change along the X axis and the thickness change along the Y axis.
<p align="center"> <img src="https://raw.githubusercontent.com/buriburisuri/ac-gan/master/png/sample1.png" width="800"/> </p>The following chart shows the losses during training. They look more stable than in my original GAN implementation.
<p align="center"> <img src="https://raw.githubusercontent.com/buriburisuri/ac-gan/master/png/train.png" width="640"/> </p>Other resources
- Original GAN tensorflow implementation
- EBGAN tensorflow implementation
- Supervised InfoGAN tensorflow implementation
- SRGAN tensorflow implementation
- Timeseries GAN tensorflow implementation
Authors
Namju Kim (buriburisuri@gmail.com) at Jamonglabs Co., Ltd.