
<p align="center"> <img width="60%" src="https://raw.githubusercontent.com/POSTECH-CVLab/PyTorch-StudioGAN/master/docs/figures/studiogan_logo.jpg" /> </p>

StudioGAN is a PyTorch library providing implementations of representative Generative Adversarial Networks (GANs) for conditional/unconditional image generation. StudioGAN aims to offer an identical playground for modern GANs so that machine learning researchers can readily compare and analyze new ideas.

Moreover, StudioGAN provides an unprecedented-scale benchmark for generative models. The benchmark includes results from GANs (BigGAN-Deep, StyleGAN-XL), auto-regressive models (MaskGIT, RQ-Transformer), and Diffusion models (LSGM++, CLD-SGM, ADM-G-U).

News

Release Notes (v.0.4.0)

Features

Implemented GANs

| Method | Venue | Architecture | GC | DC | Loss | EMA |
|---|---|---|---|---|---|---|
| DCGAN | arXiv'15 | DCGAN/ResNetGAN<sup>1</sup> | N/A | N/A | Vanilla | False |
| InfoGAN | NIPS'16 | DCGAN/ResNetGAN<sup>1</sup> | N/A | N/A | Vanilla | False |
| LSGAN | ICCV'17 | DCGAN/ResNetGAN<sup>1</sup> | N/A | N/A | Least Square | False |
| GGAN | arXiv'17 | DCGAN/ResNetGAN<sup>1</sup> | N/A | N/A | Hinge | False |
| WGAN-WC | ICLR'17 | ResNetGAN | N/A | N/A | Wasserstein | False |
| WGAN-GP | NIPS'17 | ResNetGAN | N/A | N/A | Wasserstein | False |
| WGAN-DRA | arXiv'17 | ResNetGAN | N/A | N/A | Wasserstein | False |
| ACGAN-Mod<sup>2</sup> | - | ResNetGAN | cBN | AC | Hinge | False |
| PDGAN | ICLR'18 | ResNetGAN | cBN | PD | Hinge | False |
| SNGAN | ICLR'18 | ResNetGAN | cBN | PD | Hinge | False |
| SAGAN | ICML'19 | ResNetGAN | cBN | PD | Hinge | False |
| TACGAN | NeurIPS'19 | BigGAN | cBN | TAC | Hinge | True |
| LGAN | ICML'19 | ResNetGAN | N/A | N/A | Vanilla | False |
| Unconditional BigGAN | ICLR'19 | BigGAN | N/A | N/A | Hinge | True |
| BigGAN | ICLR'19 | BigGAN | cBN | PD | Hinge | True |
| BigGAN-Deep-CompareGAN | ICLR'19 | BigGAN-Deep CompareGAN | cBN | PD | Hinge | True |
| BigGAN-Deep-StudioGAN | - | BigGAN-Deep StudioGAN | cBN | PD | Hinge | True |
| StyleGAN2 | CVPR'20 | StyleGAN2 | cAdaIN | SPD | Logistic | True |
| CRGAN | ICLR'20 | BigGAN | cBN | PD | Hinge | True |
| ICRGAN | AAAI'21 | BigGAN | cBN | PD | Hinge | True |
| LOGAN | arXiv'19 | ResNetGAN | cBN | PD | Hinge | True |
| ContraGAN | NeurIPS'20 | BigGAN | cBN | 2C | Hinge | True |
| MHGAN | WACV'21 | BigGAN | cBN | MH | MH | True |
| BigGAN + DiffAugment | NeurIPS'20 | BigGAN | cBN | PD | Hinge | True |
| StyleGAN2 + ADA | NeurIPS'20 | StyleGAN2 | cAdaIN | SPD | Logistic | True |
| BigGAN + LeCam | CVPR'21 | BigGAN | cBN | PD | Hinge | True |
| ReACGAN | NeurIPS'21 | BigGAN | cBN | D2D-CE | Hinge | True |
| StyleGAN2 + APA | NeurIPS'21 | StyleGAN2 | cAdaIN | SPD | Logistic | True |
| StyleGAN3-t | NeurIPS'21 | StyleGAN3 | cAdaIN | SPD | Logistic | True |
| StyleGAN3-r | NeurIPS'21 | StyleGAN3 | cAdaIN | SPD | Logistic | True |
| ADCGAN | ICML'22 | BigGAN | cBN | ADC | Hinge | True |

GC/DC indicates how label information is injected into the Generator or Discriminator.

EMA: Exponential Moving Average update to the generator. cBN: conditional Batch Normalization. cAdaIN: conditional version of Adaptive Instance Normalization. AC: Auxiliary Classifier. PD: Projection Discriminator. TAC: Twin Auxiliary Classifier. SPD: modified PD for StyleGAN. 2C: Conditional Contrastive loss. MH: Multi-Hinge loss. ADC: Auxiliary Discriminative Classifier. D2D-CE: Data-to-Data Cross-Entropy.
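To make two of these components concrete, below is a minimal NumPy sketch of conditional Batch Normalization and the EMA update. This is our own illustration, not StudioGAN's PyTorch code; class and function names are hypothetical.

```python
import numpy as np

class ConditionalBatchNorm:
    """cBN sketch: normalize activations over the batch, then apply a
    per-class gain (gamma) and bias (beta) selected by each sample's label."""
    def __init__(self, num_classes, num_features):
        self.gamma = np.ones((num_classes, num_features))
        self.beta = np.zeros((num_classes, num_features))

    def __call__(self, x, labels, eps=1e-5):
        # x: (batch, features); labels: (batch,) integer class ids
        x_hat = (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)
        return self.gamma[labels] * x_hat + self.beta[labels]

def ema_update(ema_params, params, decay=0.9999):
    """EMA sketch: blend the live generator's weights into a shadow copy
    after each training step; the shadow copy is used for sampling."""
    for k in ema_params:
        ema_params[k] = decay * ema_params[k] + (1.0 - decay) * params[k]
    return ema_params
```

With `gamma = 1` and `beta = 0`, cBN reduces to plain batch normalization; training learns class-specific values so that the same feature map can be modulated differently per class.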

Evaluation Metrics

| Method | Venue | Architecture |
|---|---|---|
| Inception Score (IS) | NeurIPS'16 | InceptionV3 |
| Frechet Inception Distance (FID) | NeurIPS'17 | InceptionV3 |
| Improved Precision & Recall | NeurIPS'19 | InceptionV3 |
| Classifier Accuracy Score (CAS) | NeurIPS'19 | InceptionV3 |
| Density & Coverage | ICML'20 | InceptionV3 |
| Intra-class FID | - | InceptionV3 |
| SwAV FID | ICLR'21 | SwAV |
| Clean metrics (IS, FID, PRDC) | CVPR'22 | InceptionV3 |
| Architecture-friendly metrics (IS, FID, PRDC) | arXiv'22 | Not limited to InceptionV3 |

Training and Inference Techniques

| Method | Venue | Target Architecture |
|---|---|---|
| FreezeD | CVPRW'20 | Except for StyleGAN2 |
| Top-K Training | NeurIPS'20 | - |
| DDLS | NeurIPS'20 | - |
| SeFa | CVPR'21 | BigGAN |

Reproducibility

We check the reproducibility of the GANs implemented in StudioGAN by comparing their IS and FID against the original papers. Our platform successfully reproduces most representative GANs except for PD-GAN, ACGAN, LOGAN, SAGAN, and BigGAN-Deep. FQ denotes the Flickr-Faces-HQ (FFHQ) dataset. The resolutions of the ImageNet, AFHQv2, and FQ datasets are 128, 512, and 1024, respectively.

<p align="center"> <img width="50%" src="https://raw.githubusercontent.com/POSTECH-CVLab/PyTorch-StudioGAN/master/docs/figures/Reproducibility_.png" /> </p>

Requirements

First, install a version of PyTorch that matches your environment (1.7 or later):

pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116

Then, use the following command to install the rest of the libraries:

pip install tqdm ninja h5py kornia matplotlib pandas sklearn scipy seaborn wandb PyYaml click requests pyspng imageio-ffmpeg timm

With Docker, you can use (updated 14/DEC/2022):

docker pull alex4727/experiment:pytorch113_cuda116

This is the command we use to create a container named "StudioGAN":

docker run -it --gpus all --shm-size 128g --name StudioGAN -v /path_to_your_folders:/root/code --workdir /root/code alex4727/experiment:pytorch113_cuda116 /bin/zsh

If your NVIDIA driver does not satisfy the version requirement, you can try adding the option below to the command above.

--env NVIDIA_DISABLE_REQUIRE=true

Dataset

data
└── ImageNet, Tiny_ImageNet, Baby ImageNet, Papa ImageNet, or Grandpa ImageNet
    ├── train
    │   ├── cls0
    │   │   ├── train0.png
    │   │   ├── train1.png
    │   │   └── ...
    │   ├── cls1
    │   └── ...
    └── valid
        ├── cls0
        │   ├── valid0.png
        │   ├── valid1.png
        │   └── ...
        ├── cls1
        └── ...

Quick Start

Before starting, users should log in to wandb with their personal API key.

wandb login PERSONAL_API_KEY

From release 0.3.0, you can define which evaluation metrics to use through the -metrics option. Not specifying the option defaults to calculating FID only; for example, -metrics is fid calculates only IS and FID, and -metrics none skips evaluation.

CUDA_VISIBLE_DEVICES=0 python3 src/main.py -t -metrics is fid prdc -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH
CUDA_VISIBLE_DEVICES=0 python3 src/main.py -t -metrics is fid prdc --pre_resizer lanczos --post_resizer clean -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH
CUDA_VISIBLE_DEVICES=0,1,2,3 python3 src/main.py -t -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH
export MASTER_ADDR="localhost"
export MASTER_PORT=2222
CUDA_VISIBLE_DEVICES=0,1,2,3 python3 src/main.py -t -metrics none -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH -DDP -sync_bn -mpc 

Try python3 src/main.py to see available options.

Supported Training/Testing Techniques

Analyzing Generated Images

StudioGAN supports image visualization, k-nearest neighbor analysis, linear interpolation, frequency analysis, t-SNE analysis, and semantic factorization. All results will be saved in SAVE_DIR/figures/RUN_NAME/*.png.

CUDA_VISIBLE_DEVICES=0,...,N python3 src/main.py -v -cfg CONFIG_PATH -ckpt CKPT -save SAVE_DIR
<p align="center"> <img width="95%" src="https://github.com/POSTECH-CVLab/PyTorch-StudioGAN/blob/master/docs/figures/StudioGAN_generated_images.png" /> </p>
CUDA_VISIBLE_DEVICES=0,...,N python3 src/main.py -knn -cfg CONFIG_PATH -ckpt CKPT -data DATA_PATH -save SAVE_PATH
<p align="center"> <img width="95%" src="https://raw.githubusercontent.com/POSTECH-CVLab/PyTorch-StudioGAN/master/docs/figures/knn_1.png" /> </p>
CUDA_VISIBLE_DEVICES=0,...,N python3 src/main.py -itp -cfg CONFIG_PATH -ckpt CKPT -save SAVE_DIR
<p align="center"> <img width="95%" src="https://raw.githubusercontent.com/POSTECH-CVLab/PyTorch-StudioGAN/master/docs/figures/interpolated_images.png" /> </p>
CUDA_VISIBLE_DEVICES=0,...,N python3 src/main.py -fa -cfg CONFIG_PATH -ckpt CKPT -data DATA_PATH -save SAVE_PATH
<p align="center"> <img width="60%" src="https://raw.githubusercontent.com/POSTECH-CVLab/PyTorch-StudioGAN/master/docs/figures/diff_spectrum1.png" /> </p>
CUDA_VISIBLE_DEVICES=0,...,N python3 src/main.py -tsne -cfg CONFIG_PATH -ckpt CKPT -data DATA_PATH -save SAVE_PATH
<p align="center"> <img width="80%" src="https://raw.githubusercontent.com/POSTECH-CVLab/PyTorch-StudioGAN/master/docs/figures/TSNE_results.png" /> </p>
CUDA_VISIBLE_DEVICES=0,...,N python3 src/main.py -sefa -sefa_axis SEFA_AXIS -sefa_max SEFA_MAX -cfg CONFIG_PATH -ckpt CKPT -save SAVE_PATH
<p align="center"> <img width="95%" src="https://raw.githubusercontent.com/POSTECH-CVLab/PyTorch-StudioGAN/master/docs/figures/fox.png" /> </p>

Training GANs

StudioGAN supports the training of 30 representative GANs from DCGAN to StyleGAN3-r.

We use different scripts depending on the dataset and model, as follows:

CIFAR10

CUDA_VISIBLE_DEVICES=0 python3 src/main.py -t -hdf5 -l -std_stat -std_max STD_MAX -std_step STD_STEP -metrics is fid prdc -ref "train" -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH -mpc --post_resizer "friendly" --eval_backbone "InceptionV3_tf"

CIFAR10 using StyleGAN2/3

CUDA_VISIBLE_DEVICES=0 python3 src/main.py -t -hdf5 -l -metrics is fid prdc -ref "train" -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH -mpc --post_resizer "friendly" --eval_backbone "InceptionV3_tf"

Baby/Papa/Grandpa ImageNet and ImageNet

CUDA_VISIBLE_DEVICES=0,1,2,3 python3 src/main.py -t -hdf5 -l -sync_bn -std_stat -std_max STD_MAX -std_step STD_STEP -metrics is fid prdc -ref "train" -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH -mpc --pre_resizer "lanczos" --post_resizer "friendly" --eval_backbone "InceptionV3_tf"

AFHQv2

export MASTER_ADDR="localhost"
export MASTER_PORT=8888
CUDA_VISIBLE_DEVICES=0,1,2,3 python3 src/main.py -t -metrics is fid prdc -ref "train" -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH -mpc --pre_resizer "lanczos" --post_resizer "friendly" --eval_backbone "InceptionV3_tf"

FFHQ

export MASTER_ADDR="localhost"
export MASTER_PORT=8888
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 src/main.py -t -metrics is fid prdc -ref "train" -cfg CONFIG_PATH -data DATA_PATH -save SAVE_PATH -mpc --pre_resizer "lanczos" --post_resizer "friendly" --eval_backbone "InceptionV3_tf"

Metrics

StudioGAN supports Inception Score, Frechet Inception Distance, Improved Precision and Recall, Density and Coverage, Intra-Class FID, and Classifier Accuracy Score. Users can compute Intra-Class FID with the -iFID option and Classifier Accuracy Score with the -GAN_train and -GAN_test options.

Users can change the evaluation backbone from InceptionV3 to ResNet50, SwAV, DINO, or Swin Transformer using --eval_backbone ResNet50_torch, SwAV_torch, DINO_torch, or Swin-T_torch option.

In addition, users can calculate metrics with a clean or architecture-friendly resizer using the --post_resizer clean or friendly option.

1. Inception Score (IS)

Inception Score (IS) measures how well a GAN generates high-fidelity and diverse images. Calculating IS requires the pre-trained Inception-V3 network. Note that we do not split the dataset into ten folds to calculate IS ten times.
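The score is the exponential of the mean KL divergence between each image's conditional label distribution and the marginal label distribution. A minimal NumPy sketch of this computation (our own illustration; StudioGAN obtains the class probabilities from the pre-trained Inception-V3):

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """IS = exp( mean_x KL( p(y|x) || p(y) ) ), computed from an
    (N, num_classes) array of classifier softmax outputs."""
    p_y = probs.mean(axis=0, keepdims=True)  # marginal label distribution
    kl = (probs * (np.log(probs + eps) - np.log(p_y + eps))).sum(axis=1)
    return float(np.exp(kl.mean()))
```

A perfectly confident, perfectly class-balanced classifier output yields an IS equal to the number of classes, while a uniform output yields 1, which is why higher IS indicates both fidelity and diversity.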

2. Frechet Inception Distance (FID)

FID is a widely used metric to evaluate the performance of a GAN model. Calculating FID requires the pre-trained Inception-V3 network, and modern approaches use a TensorFlow-based FID. StudioGAN utilizes a PyTorch-based FID to test GAN models in the same PyTorch environment. We show that the PyTorch-based FID implementation provides almost the same results as the TensorFlow implementation (see Appendix F of the ContraGAN paper).
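Concretely, FID is the Frechet distance between two Gaussians fitted to real and generated Inception features. A minimal NumPy/SciPy sketch of the closed-form distance (our own illustration; in practice the means and covariances come from pooled Inception-V3 features):

```python
import numpy as np
from scipy import linalg

def fid(mu1, sigma1, mu2, sigma2):
    """Frechet distance between N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2})."""
    diff = mu1 - mu2
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # drop tiny imaginary parts from numerical error
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```

Identical feature statistics give FID = 0; the score grows as the two Gaussians separate in mean or covariance.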

3. Improved Precision and Recall (Prc, Rec)

Improved precision and recall were developed to address the shortcomings of the original precision and recall. Like IS and FID, calculating improved precision and recall requires the pre-trained Inception-V3 model. StudioGAN uses the PyTorch implementation provided by the developers of the density and coverage scores.
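The metrics estimate each distribution's manifold with per-sample k-NN balls: a fake sample counts toward precision if it lies inside some real sample's ball, and symmetrically for recall. A small brute-force sketch (illustrative only, not StudioGAN's implementation):

```python
import numpy as np

def manifold_radii(feats, k=3):
    """k-th nearest-neighbour distance of each point within its own set."""
    d = np.linalg.norm(feats[:, None] - feats[None, :], axis=-1)
    return np.sort(d, axis=1)[:, k]  # column 0 is the self-distance (0)

def precision_recall(real, fake, k=3):
    """Precision: fraction of fake samples inside the real manifold;
    recall: fraction of real samples inside the fake manifold."""
    r_real, r_fake = manifold_radii(real, k), manifold_radii(fake, k)
    d = np.linalg.norm(fake[:, None] - real[None, :], axis=-1)  # (n_fake, n_real)
    precision = (d <= r_real[None, :]).any(axis=1).mean()
    recall = (d.T <= r_fake[None, :]).any(axis=1).mean()
    return float(precision), float(recall)
```

The brute-force pairwise distances above are O(n²) in memory; real implementations batch this computation over large feature sets.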

4. Density and Coverage (Dns, Cvg)

Density and coverage metrics estimate the fidelity and diversity of generated images using the pre-trained Inception-V3 model. The metrics are known to be robust to outliers, and they can detect identical real and fake distributions. StudioGAN uses the authors' official PyTorch implementation and follows the authors' suggestions for hyperparameter selection.
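A brute-force sketch of the two scores (illustrative only; StudioGAN uses the authors' official code). Density counts, for each fake sample, how many real k-NN balls contain it, normalized by k; coverage is the fraction of real samples whose k-NN ball contains at least one fake sample:

```python
import numpy as np

def density_coverage(real, fake, k=5):
    """Density: mean number of real k-NN balls containing each fake
    sample, divided by k. Coverage: fraction of real k-NN balls that
    contain at least one fake sample."""
    d_rr = np.linalg.norm(real[:, None] - real[None, :], axis=-1)
    radii = np.sort(d_rr, axis=1)[:, k]  # k-NN radius per real point
    d = np.linalg.norm(fake[:, None] - real[None, :], axis=-1)  # (n_fake, n_real)
    inside = d <= radii[None, :]
    density = inside.sum(axis=1).mean() / k
    coverage = inside.any(axis=0).mean()
    return float(density), float(coverage)
```

Because density counts ball memberships rather than a binary hit, it is not capped at 1 and is less sensitive to a single outlier real sample inflating the manifold.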

Benchmark

※ We always welcome contributions if you find any incorrect implementation, bug, or misreported score.

We report the best IS, FID, Improved Precision & Recall, and Density & Coverage of GANs.

To download all checkpoints reported in StudioGAN, please click here (Hugging Face Hub).

You can evaluate a checkpoint by adding the -ckpt CKPT_PATH option together with the corresponding configuration path -cfg CORRESPONDING_CONFIG_PATH.

1. GANs from StudioGAN

The resolutions of CIFAR10, Baby ImageNet, Papa ImageNet, Grandpa ImageNet, ImageNet, AFHQv2, and FQ are 32, 64, 64, 64, 128, 512, and 1024, respectively.

We use the same number of generated images as training images for the Frechet Inception Distance (FID), Precision, Recall, Density, and Coverage calculations. For the experiments using Baby/Papa/Grandpa ImageNet and ImageNet, we exceptionally use 50k generated images against the complete training set as the real images.

All features and moments of reference datasets can be downloaded via features and moments.

<p align="center"> <img width="95%" src="https://raw.githubusercontent.com/POSTECH-CVLab/PyTorch-StudioGAN/master/docs/figures/StudioGAN_Benchmark_.png"/> </p>

2. Other generative models

The resolutions of ImageNet-128 and ImageNet-256 are 128 and 256, respectively.

All images used for Benchmark can be downloaded via One Drive (will be uploaded soon).

<p align="center"> <img width="95%" src="https://raw.githubusercontent.com/POSTECH-CVLab/PyTorch-StudioGAN/master/docs/figures/Other_Benchmark.png"/> </p>

Evaluating pre-saved image folders

CUDA_VISIBLE_DEVICES=0,...,N python3 src/evaluate.py -metrics is fid prdc --dset1 DSET1 --dset2 DSET2
CUDA_VISIBLE_DEVICES=0,...,N python3 src/evaluate.py -metrics is fid prdc --dset1_feats DSET1_FEATS --dset1_moments DSET1_MOMENTS --dset2 DSET2
export MASTER_ADDR="localhost"
export MASTER_PORT=2222
CUDA_VISIBLE_DEVICES=0,...,N python3 src/evaluate.py -metrics is fid prdc --post_resizer friendly --dset1 DSET1 --dset2 DSET2 -DDP

StudioGAN thanks the following repositories for sharing code

[MIT license] Synchronized BatchNorm: https://github.com/vacancy/Synchronized-BatchNorm-PyTorch

[MIT license] Self-Attention module: https://github.com/voletiv/self-attention-GAN-pytorch

[MIT license] DiffAugment: https://github.com/mit-han-lab/data-efficient-gans

[MIT license] PyTorch Improved Precision and Recall: https://github.com/clovaai/generative-evaluation-prdc

[MIT license] PyTorch Density and Coverage: https://github.com/clovaai/generative-evaluation-prdc

[MIT license] PyTorch clean-FID: https://github.com/GaParmar/clean-fid

[NVIDIA source code license] StyleGAN2: https://github.com/NVlabs/stylegan2

[NVIDIA source code license] Adaptive Discriminator Augmentation: https://github.com/NVlabs/stylegan2

[Apache License] Pytorch FID: https://github.com/mseitzer/pytorch-fid

License

PyTorch-StudioGAN is an open-source library under the MIT license (MIT). However, portions of the library are available under distinct license terms: StyleGAN2, StyleGAN2-ADA, and StyleGAN3 are licensed under the NVIDIA source code license, and PyTorch-FID is licensed under the Apache License.

Citation

StudioGAN was established for the following research projects. Please cite our work if you use StudioGAN.

@article{kang2023StudioGANpami,
  title   = {{StudioGAN: A Taxonomy and Benchmark of GANs for Image Synthesis}},
  author  = {MinGuk Kang and Joonghyuk Shin and Jaesik Park},
  journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)},
  year    = {2023}
}
@inproceedings{kang2021ReACGAN,
  title     = {{Rebooting ACGAN: Auxiliary Classifier GANs with Stable Training}},
  author    = {Minguk Kang and Woohyeon Shim and Minsu Cho and Jaesik Park},
  booktitle = {Conference on Neural Information Processing Systems (NeurIPS)},
  year      = {2021}
}
@inproceedings{kang2020ContraGAN,
  title     = {{ContraGAN: Contrastive Learning for Conditional Image Generation}},
  author    = {Minguk Kang and Jaesik Park},
  booktitle = {Conference on Neural Information Processing Systems (NeurIPS)},
  year      = {2020}
}

<a name="footnote_1">[1]</a> Experiments on Tiny ImageNet are conducted using the ResNet architecture instead of CNN.

<a name="footnote_2">[2]</a> Our re-implementation of ACGAN (ICML'17) with slight modifications, which brings a strong performance enhancement in the CIFAR10 experiment.