Hijack-GAN: Unintended-Use of Pretrained, Black-Box GANs
In this work, we propose HijackGAN, a framework that enables non-linear latent-space traversal and provides high-level control (e.g., over attributes, head pose, and landmarks) of unconditional image-generation GANs in a fully black-box setting. It opens up the possibility of reusing pretrained GANs while raising concerns about unintended usage.
[Paper (CVPR 2021)] [Project Page]
Prerequisites
Install the required packages:
pip install -r requirements.txt
Download pretrained GANs
Download the CelebAHQ pretrained weights of ProgressiveGAN [paper][code] and StyleGAN [paper][code], and then put those weights in `./models/pretrain`. For example,
pretrain/
├── Pretrained_Models_Should_Be_Placed_Here
├── karras2018iclr-celebahq-1024x1024.pkl
├── karras2019stylegan-celebahq-1024x1024.pkl
├── pggan_celebahq_z.pt
├── stylegan_celebahq_z.pt
├── stylegan_headpose_z_dp.pt
└── stylegan_landmark_z.pt
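Before running anything, a quick sanity check can confirm the weights landed where the scripts expect them. This is just a convenience sketch; the filenames come from the listing above and the `./models/pretrain` path from the text:

```python
# Sanity check: verify the pretrained weights are in ./models/pretrain.
# Filenames are taken from the directory listing above.
import os

EXPECTED = [
    "karras2018iclr-celebahq-1024x1024.pkl",
    "karras2019stylegan-celebahq-1024x1024.pkl",
    "pggan_celebahq_z.pt",
    "stylegan_celebahq_z.pt",
    "stylegan_headpose_z_dp.pt",
    "stylegan_landmark_z.pt",
]

missing = [f for f in EXPECTED
           if not os.path.isfile(os.path.join("models", "pretrain", f))]
print("Missing weights:", ", ".join(missing) if missing else "none")
```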
Quick Start
Specify the number of images to edit, a model to generate the images, and some parameters for editing.
LATENT_CODE_NUM=1
python edit.py \
-m pggan_celebahq \
-b boundaries/ \
-n "$LATENT_CODE_NUM" \
-o results/stylegan_celebahq_eyeglasses \
--step_size 0.2 \
--steps 40 \
--attr_index 0 \
--task attribute \
--method ours
Usage
Important: Different input images (initial points) may call for different step sizes and numbers of steps. The following examples use the parameters from our paper; you can adjust them for better results, e.g., with a small sweep like the sketch below.
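A minimal sweep sketch, reusing the Quick Start flags; the step-size values here are illustrative choices, not the paper's settings:

```python
# Sketch: sweep the step size for one latent code and inspect the results.
# Flags mirror the Quick Start command; sweep values are illustrative.
import subprocess

for step_size in [0.1, 0.2, 0.4]:
    subprocess.run([
        "python", "edit.py",
        "-m", "pggan_celebahq",
        "-b", "boundaries/",
        "-n", "1",
        "-o", f"results/sweep_step_{step_size}",
        "--step_size", str(step_size),
        "--steps", "40",
        "--attr_index", "0",
        "--task", "attribute",
        "--method", "ours",
    ], check=True)
```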
Specify Number of Samples
LATENT_CODE_NUM=1
Unconditional Modification
python edit.py \
-m pggan_celebahq \
-b boundaries/ \
-n "$LATENT_CODE_NUM" \
-o results/stylegan_celebahq_smile_editing \
--step_size 0.2 \
--steps 40 \
--attr_index 0 \
--task attribute
Conditional Modification
python edit.py \
-m pggan_celebahq \
-b boundaries/ \
-n "$LATENT_CODE_NUM" \
-o results/stylegan_celebahq_smile_editing \
--step_size 0.2 \
--steps 40 \
--attr_index 0 \
--condition \
-i codes/pggan_cond/age.npy \
--task attribute
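Here `-i` points to a `.npy` file of latent codes (the repo ships `codes/pggan_cond/age.npy`). If you want to condition on your own samples, a sketch for writing such a file follows; the (N, 512) z-space shape is an assumption based on the CelebAHQ generators, not something this README specifies:

```python
# Sketch: save latent codes in the .npy format expected by the -i flag.
# ASSUMPTION: codes are z-space vectors of shape (N, 512).
import numpy as np

codes = np.random.randn(1, 512).astype(np.float32)  # assumed shape
np.save("codes/my_codes.npy", codes)
```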
Head Pose
Pitch
python edit.py \
-m stylegan_celebahq \
-b boundaries/ \
-n "$LATENT_CODE_NUM" \
-o results/ \
--task head_pose \
--method ours \
--step_size 0.01 \
--steps 2000 \
--attr_index 1 \
--condition \
--direction -1 \
--demo
Yaw
python edit.py \
-m stylegan_celebahq \
-b boundaries/ \
-n "$LATENT_CODE_NUM" \
-o results/ \
--task head_pose \
--method ours \
--step_size 0.1 \
--steps 200 \
--attr_index 0 \
--condition \
--direction 1 \
--demo
Landmarks
Parameters for reference, as (attr_index, step_size, steps): (4, 0.005, 400), (5, 0.01, 100), (6, 0.1, 200), (8, 0.1, 200).
CUDA_VISIBLE_DEVICES=0 python edit.py \
-m stylegan_celebahq \
-b boundaries/ \
-n "$LATENT_CODE_NUM" \
-o results/ \
--task landmark \
--method ours \
--step_size 0.1 \
--steps 200 \
--attr_index 6 \
--condition \
--direction 1 \
--demo
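To run all four reference settings from the list above in one go, here is a small driver sketch; it simply shells out to `edit.py` with the same flags as the command above:

```python
# Sketch: run edit.py once per (attr_index, step_size, steps) setting
# from the landmark reference list above.
import subprocess

SETTINGS = [(4, 0.005, 400), (5, 0.01, 100), (6, 0.1, 200), (8, 0.1, 200)]

for attr_index, step_size, steps in SETTINGS:
    subprocess.run([
        "python", "edit.py",
        "-m", "stylegan_celebahq",
        "-b", "boundaries/",
        "-n", "1",
        "-o", f"results/landmark_{attr_index}",
        "--task", "landmark",
        "--method", "ours",
        "--step_size", str(step_size),
        "--steps", str(steps),
        "--attr_index", str(attr_index),
        "--condition",
        "--direction", "1",
        "--demo",
    ], check=True)
```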
Generate Balanced Data
This is a template showing how we generated balanced data for attribute manipulation (16 attributes in our internal experiments). You can modify it to better fit your task.
Please first refer to here and replace YOUR_TASK_MODEL with your own classification model, and then run:
NUM=500000
CUDA_VISIBLE_DEVICES=0 python generate_balanced_data.py -m stylegan_celebahq \
-o ./generated_data -K ./generated_data/indices.pkl -n "$NUM" -SI 0 --no_generated_imgs
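For orientation, the balancing step amounts to rejection sampling until every class holds the same number of latent codes. The conceptual sketch below is for a single binary attribute; the classifier, cap, and dimensions are hypothetical placeholders, and the real pipeline lives in `generate_balanced_data.py`:

```python
# Conceptual sketch: rejection-sample latent codes until both classes of
# one binary attribute are equally represented. All specifics here are
# placeholders; see generate_balanced_data.py for the actual pipeline.
import os
import numpy as np

def classify(z):
    """Hypothetical stand-in for YOUR_TASK_MODEL (returns a 0/1 label)."""
    return int(np.random.rand() > 0.5)

CAP = 1000                      # codes to keep per class (illustrative)
kept = {0: [], 1: []}

while min(len(v) for v in kept.values()) < CAP:
    z = np.random.randn(512)    # assumed z-space dimensionality
    label = classify(z)
    if len(kept[label]) < CAP:
        kept[label].append(z)

os.makedirs("generated_data", exist_ok=True)
np.save("generated_data/balanced_codes.npy", np.stack(kept[0] + kept[1]))
```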
Evaluations
TO-DO
- Basic usage
- Prerequisites
- How to generate data
- How to evaluate
Acknowledgment
This code is built upon InterfaceGAN.