
GAN steerability

Project Page | Paper

<img src='img/teaser.jpeg' width=600>

On the "steerability" of generative adversarial networks.
Ali Jahanian*, Lucy Chai*, Phillip Isola

Prerequisites

Table of Contents:<br>

  1. Setup<br>
  2. Visualizations - plotting image panels, videos, and distributions<br>
  3. Training - pipeline for training your own walks<br>
  4. Notebooks - some jupyter notebooks, good place to start for trying your own transformations<br>
  5. PyTorch/Colab Demo - pytorch implementation in a colab notebook<br>
<a name="setup"/>

Setup

```bash
git clone https://github.com/ali-design/gan_steerability.git
conda env create -f environment.yml
bash resources/download_resources.sh
```
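After creating the environment, activate it before running any of the commands below (the environment name `gan_steerability` is the one used in the Notebooks section; it is set in environment.yml):

```bash
conda activate gan_steerability
```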
<a name="visualizations"/>

Visualizations

Plotting image panels: <br> <img src='img/panel.png' width=600>

```bash
python vis_image.py \
        models_pretrained/biggan_zoom_linear_lr0.0001_l2/model_20000_final.ckpt \
        models_pretrained/biggan_zoom_linear_lr0.0001_l2/opt.yml \
        --gpu 0 --num_samples 50 --noise_seed 20 --truncation 0.5 --category 207
```

```bash
python vis_image.py \
        models_pretrained/stylegan_color_linear_lr0.0001_l2_cats_w/model_2000_final.ckpt \
        models_pretrained/stylegan_color_linear_lr0.0001_l2_cats_w/opt.yml \
        --gpu 1 --num_samples 10 --noise_seed 20
```

To make videos: <br> <img src='img/cats.gif' width=300><img src='img/color.gif' width=300>

```bash
python vis_video.py [CHECKPOINT] [CONFIG] --gpu [GPU] --noise_seed [SEED] --sample [SAMPLE]
```

```bash
python vis_video.py models_pretrained/biggan_color_linear_lr0.001_l2/model_20000_final.ckpt \
        models_pretrained/biggan_color_linear_lr0.001_l2/opt.yml --gpu 0 --sample 10 \
        --noise_seed 20 --truncation 0.5 --category 538 --min_alpha -1 --max_alpha 0
```

To draw distributions: <br> <img src='img/distribution.png' width=300>

To draw distributions, you will need to have downloaded the object detector through resources/download_resources.sh (for objects) or installed dlib through environment.yml (for faces).

```bash
python vis_distribution.py [CHECKPOINT] [CONFIG] --gpu [GPU]
```

```bash
python vis_distribution.py models_pretrained/biggan_shiftx_linear_lr0.001_l2/model_20000_final.ckpt \
        models_pretrained/biggan_shiftx_linear_lr0.001_l2/opt.yml --gpu 0
```
<a name="training"/>

Training walks

```bash
# train a biggan NN walk for shiftx with lpips loss
python train.py --model biggan --transform shiftx --num_samples 20000 --learning_rate 0.0001 \
        --walk_type NNz --loss lpips --gpu 0 --eps 25 --num_steps 5

# train a stylegan linear walk with l2 loss using the w latent space
python train.py --model stylegan --transform color --num_samples 2000 --learning_rate 0.0001 \
        --walk_type linear --loss l2 --gpu 0 --latent w --model_save_freq 100

# train a pgan linear walk with l2 loss
python train.py --model pgan --transform color --num_samples 60000 --learning_rate 0.0001 \
        --walk_type linear --loss l2 --gpu 0 --dset celebahq --model_save_freq 1000

# alternatively, specify the training options through a config file
python train.py --config_file config/biggan_color_linear.yml
```
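For intuition: a linear walk learns a single direction w in latent space such that moving a latent code z by a step alpha along w steers the generated image G(z + alpha*w) toward the target transformation (zoom, shift, color, etc.). A minimal numpy sketch of applying an already-trained walk (the function name and the 128-dim latent size are illustrative, not from this repo):

```python
import numpy as np

def apply_linear_walk(z, w, alpha):
    """Shift latent code z along the learned walk direction w by step size alpha."""
    return z + alpha * w

rng = np.random.default_rng(20)
z = rng.standard_normal((1, 128))  # a batch of one latent code
w = rng.standard_normal((1, 128))  # learned walk direction (fixed after training)
z_edit = apply_linear_walk(z, w, alpha=0.5)  # feed z_edit to the generator
```

Varying alpha over a range (as vis_video.py does with --min_alpha and --max_alpha) produces the frames of the transformation videos shown above.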
<a name="notebooks"/>

Notebooks

```bash
source activate gan_steerability
python -m ipykernel install --user --name gan_steerability
```
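With the kernel registered, launch jupyter and select the gan_steerability kernel when opening a notebook (this assumes jupyter is installed in the environment and the notebooks live in a notebooks/ directory):

```bash
jupyter notebook notebooks/
```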
<a name="pytorch"/>

PyTorch

Citation

If you use this code for your research, please cite our paper:

```
@inproceedings{gansteerability,
  title={On the "steerability" of generative adversarial networks},
  author={Jahanian, Ali and Chai, Lucy and Isola, Phillip},
  booktitle={International Conference on Learning Representations},
  year={2020}
}
```