
AdvStyle - Official PyTorch Implementation

Paper | Supp

<img src='img/teaser.png' >

Discovering Interpretable Latent Space Directions of GANs Beyond Binary Attributes.

Huiting Yang, Liangyu Chai, Qiang Wen, Shuang Zhao, Zixun Sun, Shengfeng He

In CVPR 2021

Prerequisites

Setup

git clone https://github.com/BERYLSHEEP/AdvStyle.git

Testing Demo

The following commands are examples of testing the learned directions:

# stylegan ffhq
python new_demo.py manipulate_test supermodel \
				--gan_model stylegan_ffhq --resolution 1024 --latent_type z
	
# stylegan anime
python new_demo.py manipulate_test maruko \
				--gan_model stylegan_anime --resolution 512 --latent_type z
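Under the hood, manipulating a learned direction amounts to shifting a latent code along the corresponding boundary vector before it is fed to the generator. Below is a minimal NumPy sketch of that idea; the boundary file path, array shapes, and the alpha strength are assumptions for illustration, not the script's actual interface.

import numpy as np

# Sketch only: StyleGAN z-space latents are assumed to be 512-D vectors.
rng = np.random.RandomState(0)
z = rng.randn(1, 512).astype(np.float32)                 # sampled latent code
direction = np.load('boundaries/supermodel.npy')         # learned attribute direction (assumed path)
direction = direction.reshape(1, -1).astype(np.float32)

alpha = 3.0                                              # manipulation strength (hypothetical value)
z_edited = z + alpha * direction                         # move the latent code along the direction
# z_edited would then be passed to the StyleGAN generator to render the edited image.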

If you want to manipulate multiple attributes simultaneously, list all of them on the command line, separated by commas, as follows:

# multi attribute manipulation
python new_demo.py manipulate_test blonde,open_mouth \
				--gan_model stylegan_anime --resolution 512 --latent_type z
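Conceptually, multi-attribute manipulation sums the individual direction vectors, each scaled by its own strength. A minimal sketch, assuming one .npy boundary file per attribute and hypothetical strength values:

import numpy as np

z = np.random.randn(1, 512).astype(np.float32)           # sampled latent code (assumed 512-D z space)
attributes = ['blonde', 'open_mouth']
strengths = {'blonde': 2.0, 'open_mouth': 1.5}           # hypothetical per-attribute strengths

z_edited = z.copy()
for name in attributes:
    d = np.load('boundaries/%s.npy' % name)              # assumed layout: boundaries/{attribute}.npy
    z_edited += strengths[name] * d.reshape(1, -1).astype(np.float32)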

If you want to specify the latent code to be manipulated, set the --noise_path option:

# specific latent code
python new_demo.py manipulate_test maruko \
				--gan_model stylegan_anime --resolution 512 --latent_type z \
				--noise_path ./noise/maruko/2.npy
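The .npy file passed via --noise_path is expected to hold a saved latent code, so the same image can be edited reproducibly across runs. A small sketch of inspecting such a file, assuming it stores a single z-space vector:

import numpy as np

z = np.load('./noise/maruko/2.npy')                      # saved latent code used by --noise_path
print(z.shape, z.dtype)                                  # assumed to be one 512-D z vector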

Results are saved to result/{attribute}.

The attribute names are the file names in the boundaries directory.
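To see which attributes are available, you can list the boundary files directly; a short sketch, assuming each attribute is stored as a single .npy file inside boundaries:

import os

# Each file in the boundaries directory corresponds to one attribute name.
names = [os.path.splitext(f)[0] for f in os.listdir('boundaries') if f.endswith('.npy')]
print(sorted(names))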

Interactive Demo

<img src="img/app.png">

Related Project

Citation

If you use this code for your research, please cite our paper:

@InProceedings{Yang_2021_CVPR,
    author    = {Yang, Huiting and Chai, Liangyu and Wen, Qiang and Zhao, Shuang and Sun, Zixun and He, Shengfeng},
    title     = {Discovering Interpretable Latent Space Directions of GANs Beyond Binary Attributes},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {12177-12185}
}