TAC-GAN

This repository, maintained by Yanwu Xu, contains the PyTorch source code to reproduce the experiments in our NeurIPS 2019 paper Twin Auxiliary Classifiers GAN by Mingming Gong*, Yanwu Xu*, Chunyuan Li, Kun Zhang, and Kayhan Batmanghelich†.

<p align="center"> <img width="75%" height="75%" src="https://github.com/batmanlab/twin_ac/blob/master/figure/tac_gan_scheme.png"> </p>

Visualization on Mixture of Gaussian

We visualize the biased reconstruction produced by AC-GAN, the correction of this bias by our TAC-GAN, and Projection-GAN for comparison.

(Figure panels: Original | TAC | AC | Projection)

Experiments on real data

The implementations for CIFAR-100 and ImageNet1000 are based on the PyTorch BigGAN implementation. To prepare the environment for running our code, `cd` into the repository and run:

```shell
conda install pytorch torchvision cudatoolkit=10.1 -c pytorch
```

(This should work directly.) Alternatively:

```shell
conda env create -f environment.yml
```

Overlapping MNIST with classes '0'+'1' and '0'+'2'

<p align="center"> <img width="75%" height="75%" src="https://github.com/batmanlab/twin_ac/blob/master/figure/overlap_MNIST.png"> </p>

CIFAR-100 generation, evaluated with Inception Score, FID, and LPIPS at 32×32 resolution

<p align="center"> <img width="75%" height="75%" src="https://github.com/batmanlab/twin_ac/blob/master/figure/cifar100.png"> </p>

ImageNet1000 generated images at 128×128 resolution

<p align="center"> <img width="75%" height="75%" src="https://github.com/batmanlab/twin_ac/blob/master/figure/part_of_imagenet.png"> </p>

To get an idea of our implementation and run a simplified version of our method, do the following:

```
├── Twin_AC_simplified - easy-to-read implementation
   ├── main.py - Script to run 1-D MOG
```
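The core twin-classifier idea can be sketched in a few lines of PyTorch. This is a hypothetical minimal illustration, not the repository's actual code: the names `Toy1DGen` and `tac_generator_loss` are made up, and the real networks and loss weighting differ. The point is the structure of the generator objective, which adds a second ("twin") auxiliary classifier term with opposite sign to cancel the bias of plain AC-GAN.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Toy1DGen(nn.Module):
    """Toy conditional generator: maps (noise, class embedding) to a 1-D sample.
    Purely illustrative; the repo's networks are different."""
    def __init__(self, n_classes=3, z_dim=2):
        super().__init__()
        self.emb = nn.Embedding(n_classes, z_dim)
        self.net = nn.Sequential(
            nn.Linear(2 * z_dim, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, z, y):
        return self.net(torch.cat([z, self.emb(y)], dim=1))

def tac_generator_loss(d_logit, c_logits, mi_logits, y):
    """Generator loss on fake samples in the TAC-GAN style:
    - adversarial term (fool the discriminator),
    - cross-entropy under the shared classifier C (as in AC-GAN),
    - MINUS cross-entropy under the twin classifier C^mi, which is trained
      on generated data only and whose term cancels AC-GAN's bias."""
    adv = F.binary_cross_entropy_with_logits(d_logit, torch.ones_like(d_logit))
    ac = F.cross_entropy(c_logits, y)    # pull fakes toward class y under C
    tac = F.cross_entropy(mi_logits, y)  # twin term, subtracted
    return adv + ac - tac
```

The twin classifier itself would be trained on generated samples with an ordinary cross-entropy loss; only its sign in the generator objective distinguishes TAC-GAN from AC-GAN here.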

To replicate the results of our NeurIPS paper, do the following:

Simulation on MOG toy data

```
MOG
├── MOG_visualization.ipynb - Notebook to run 1-D MOG.
├── One_Dimensional_MOG.py - Script to run 1-D MOG.
└── Two_Dimensional_MOG.py - Script to run 2-D MOG.
```
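For reference, sampling from a 1-D mixture of Gaussians is simple to do by hand. The sketch below is a generic sampler with illustrative parameters (three equal-weight components); the means, standard deviation, and component count used by the scripts above may differ.

```python
import numpy as np

def sample_mog_1d(n, means=(-3.0, 0.0, 3.0), std=0.5, seed=0):
    """Draw n samples from a 1-D mixture of Gaussians with equal weights.
    Returns (samples, component_labels). Parameters are illustrative only."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, len(means), size=n)          # pick a component per sample
    samples = rng.normal(np.asarray(means)[labels], std)  # draw from that component
    return samples, labels
```

The component labels double as class labels, which is what makes MOG data convenient for studying conditional GANs: the true class-conditional densities are known exactly.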

Experiments on real data

For the real-data experiments, the code is based on the PyTorch BigGAN implementation.

Training data preparation

First, you need to convert the ImageNet1000 images to an HDF5 file, following the instructions in the PyTorch BigGAN implementation.
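The conversion BigGAN's preprocessing performs amounts to packing an image array and its labels into one HDF5 file. The sketch below shows the general shape of that step with `h5py`; the dataset names (`'imgs'`, `'labels'`) and chunking are assumptions here, so check the BigGAN repository's `make_hdf5.py` for the exact layout it expects.

```python
import numpy as np
import h5py

def images_to_hdf5(images, labels, out_path):
    """Write a (N, H, W, C) uint8 image array and integer labels to an HDF5
    file. Dataset keys and per-image chunking mimic the common BigGAN-style
    layout, but are assumptions; verify against make_hdf5.py."""
    with h5py.File(out_path, "w") as f:
        f.create_dataset("imgs", data=images, dtype="uint8",
                         chunks=(1,) + images.shape[1:])  # one image per chunk
        f.create_dataset("labels", data=labels, dtype="int64")
```

Chunking one image at a time keeps random-access reads cheap during training, which is why HDF5 is preferred over a folder of JPEGs at ImageNet scale.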

Running on Cifar100 and Imagenet1000

```
├── TAC-BigGAN
   ├── scripts
      ├── twin_ac_launch_cifar100_ema.sh - Script to run TAC-GAN on CIFAR-100
      ├── twin_ac_launch_BigGAN_ch64_bs256x8.sh - Script to run TAC-GAN on ImageNet1000
```

If you want to change the weight of the auxiliary classifier, modify the '--AC_weight' argument in the 'twin_ac_launch_cifar100_ema.sh' script. The same applies to AC-GAN and Projection-GAN: use the 'ac_launch_cifar100_ema.sh' and 'projection_launch_cifar100_ema.sh' scripts, respectively.

Citation

```
@incollection{NIPS2019_8414,
title = {Twin Auxilary Classifiers GAN},
author = {Gong, Mingming and Xu, Yanwu and Li, Chunyuan and Zhang, Kun and Batmanghelich, Kayhan},
booktitle = {Advances in Neural Information Processing Systems 32},
editor = {H. Wallach and H. Larochelle and A. Beygelzimer and F. d\textquotesingle Alch\'{e}-Buc and E. Fox and R. Garnett},
pages = {1330--1339},
year = {2019},
publisher = {Curran Associates, Inc.},
url = {http://papers.nips.cc/paper/8414-twin-auxilary-classifiers-gan.pdf}
}
```

Acknowledgments

This work was partially supported by NIH Award Number 1R01HL141813-01, NSF 1839332 Tripod+X, and SAP SE. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan X Pascal GPU used for this research. We are also grateful for the computational resources provided by Pittsburgh Supercomputing grant number TG-ASC170024.