Commonality in Natural Images Rescues GANs: Pretraining GANs with Generic and Privacy-free Synthetic Data (CVPR 2022)
The potential of primitive shapes for representing objects: using only lines, ellipses, and rectangles, we can depict a cat and a temple. These examples motivate Primitives, which generates data by simply composing such shapes.
Official PyTorch implementation of "Commonality in Natural Images Rescues GANs: Pretraining GANs with Generic and Privacy-free Synthetic Data"
Commonality in Natural Images Rescues GANs: Pretraining GANs with Generic and Privacy-free Synthetic Data
Kyungjune Baek and Hyunjung Shim
Yonsei University
Abstract: Transfer learning for GANs successfully improves generation performance under low-shot regimes. However, existing studies show that a model pretrained on a single benchmark dataset does not generalize to various target datasets. More importantly, the pretrained model can be vulnerable to copyright or privacy risks as membership inference attacks advance. To resolve both issues, we propose an effective and unbiased data synthesizer, namely Primitives-PS, inspired by the generic characteristics of natural images. Specifically, we utilize 1) the generic statistics on the frequency magnitude spectrum, 2) the elementary shape (i.e., image composition via elementary shapes) for representing the structure information, and 3) the existence of saliency as prior. Since our synthesizer only considers the generic properties of natural images, the single model pretrained on our dataset can be consistently transferred to various target datasets, and even outperforms the previous methods pretrained with natural images in terms of Fréchet inception distance. Extensive analysis, ablation studies, and evaluations demonstrate that each component of our data synthesizer is effective, and provide insights into the desirable nature of the pretrained model for the transferability of GANs.
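The three ingredients above translate naturally into code. Below is a minimal, self-contained sketch of a Primitives-PS-style sample, assuming only NumPy and Pillow: a pink-noise background whose magnitude spectrum falls off as 1/f, with randomly composed primitive shapes pasted on top as a crude salient foreground. This is an illustration of the idea, not the authors' exact synthesizer; all function names and parameters here are our own.

```python
# Sketch of a Primitives-PS-style sample (NOT the authors' exact synthesizer).
import numpy as np
from PIL import Image, ImageDraw

def pink_noise(size, rng):
    """Per-channel noise whose frequency magnitude falls off as 1/f."""
    fy = np.fft.fftfreq(size)[:, None]
    fx = np.fft.fftfreq(size)[None, :]
    f = np.sqrt(fx ** 2 + fy ** 2)
    f[0, 0] = 1.0  # avoid dividing by zero at the DC component
    channels = []
    for _ in range(3):
        phase = rng.uniform(0.0, 2.0 * np.pi, (size, size))
        spec = np.exp(1j * phase) / f  # magnitude ~ 1/f, random phase
        img = np.fft.ifft2(spec).real
        img = (img - img.min()) / (img.max() - img.min() + 1e-8)
        channels.append(img)
    return np.stack(channels, axis=-1)

def random_shapes(size, n_shapes, rng):
    """Compose random lines, ellipses, and rectangles on a black canvas."""
    canvas = Image.new('RGB', (size, size))
    draw = ImageDraw.Draw(canvas)
    for _ in range(n_shapes):
        xs = np.sort(rng.integers(0, size, 2))
        ys = np.sort(rng.integers(0, size, 2))
        box = (int(xs[0]), int(ys[0]), int(xs[1]), int(ys[1]))
        color = tuple(int(c) for c in rng.integers(1, 256, 3))  # never pure black
        kind = rng.choice(['line', 'ellipse', 'rectangle'])
        if kind == 'line':
            draw.line(box, fill=color, width=int(rng.integers(1, 6)))
        elif kind == 'ellipse':
            draw.ellipse(box, fill=color)
        else:
            draw.rectangle(box, fill=color)
    return np.asarray(canvas).astype(np.float64) / 255.0

rng = np.random.default_rng(0)
size = 256
background = pink_noise(size, rng)                     # generic 1/f statistics
foreground = random_shapes(size, n_shapes=8, rng=rng)  # primitive-shape composition
mask = foreground.sum(axis=-1, keepdims=True) > 0      # crude saliency prior
sample = np.where(mask, foreground, background)
Image.fromarray((sample * 255).astype(np.uint8)).save('primitives_ps_like.png')
```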
Requirements
Environment
For easy environment setup, please use the Docker image.
- Replace $DOCKER_CONTAINER_NAME, $LOCAL_MAPPING_DIRECTORY, and $DOCKER_MAPPING_DIRECTORY with your own container name and directories.
nvidia-docker run -it --entrypoint /bin/bash --shm-size 96g --name $DOCKER_CONTAINER_NAME -v $LOCAL_MAPPING_DIRECTORY:$DOCKER_MAPPING_DIRECTORY bkjbkj12/stylegan2_ada-pytorch1.8:1.0
nvidia-docker start $DOCKER_CONTAINER_NAME
nvidia-docker exec -it $DOCKER_CONTAINER_NAME bash
The image is built upon nvidia/cuda:10.0-cudnn7-devel-ubuntu18.04.
Then, go to the directory containing the source code.
Dataset
The low-shot datasets are from the DiffAug repository.
Pretrained checkpoint
Please download a source (pretrained) model from the links below. We mainly used Primitives-PS.
Hardware
- Mainly tested on a Titan XP (12GB), a V100 (32GB), and an A6000 (48GB).
How to Run (Quick Start)
Pretraining: To change the type of the pretraining dataset, comment out L231 and uncomment these lines.
The file "noise.zip" is not required; just running the script below works.
CUDA_VISIBLE_DEVICES=$GPU_NUMBER python train.py --outdir=$OUTPUT_DIR --data=./data/noise.zip --gpus=1
Finetuning: Place the pretrained .pkl file in the directory specified in the code, or change the path accordingly.
CUDA_VISIBLE_DEVICES=$GPU_NUMBER python train.py --outdir=$OUTPUT_DIR --gpus=1 --data $DATA_DIR --kimg 400 --resume $PKL_NAME_TO_RESUME
Examples
Pretraining:
CUDA_VISIBLE_DEVICES=0 python train.py --outdir=Primitives-PS-Pretraining --data=./data/noise.zip --gpus=1
Finetuning:
CUDA_VISIBLE_DEVICES=0 python train.py --outdir=Primitives-PS-to-Obama --gpus=1 --data ../data/obama.zip --kimg 400 --resume Primitives-PS
Pretrained Model
Download: OneDrive
- The Google Drive links are deprecated.
| PinkNoise | Primitives | Primitives-S | Primitives-PS |
| --- | --- | --- | --- |
| Obama | Grumpy Cat | Panda | Bridge of Sighs |
| Medici fountain | Temple of heaven | Wuzhen | Buildings |
Synthetic Datasets
Results
Generating images from the same latent vector
GIF
Because of the file-size limit, the model in the .gif is not fully converged (training runs for 400K iterations in total, but the .gif covers only the first 120K).
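For reference, the following is a minimal sketch of how one might render the same latent vector through several downloaded checkpoints to reproduce this kind of comparison. It assumes the dnnlib and legacy modules of the StyleGAN2-ADA-PyTorch codebase that this repository builds on; the .pkl file names are hypothetical placeholders for the checkpoints above.

```python
# Minimal sketch: render one fixed latent through several checkpoints.
# Assumes the StyleGAN2-ADA-PyTorch modules (dnnlib, legacy) used by this
# codebase; the .pkl file names are hypothetical placeholders.
import torch
from PIL import Image

import dnnlib
import legacy

device = torch.device('cuda')
torch.manual_seed(0)
z = torch.randn([1, 512], device=device)  # one latent shared across models

for pkl in ['Primitives-PS-to-Obama.pkl', 'Primitives-PS-to-Panda.pkl']:
    with dnnlib.util.open_url(pkl) as f:
        G = legacy.load_network_pkl(f)['G_ema'].to(device)  # EMA generator
    label = torch.zeros([1, G.c_dim], device=device)  # unconditional: c_dim == 0
    with torch.no_grad():
        img = G(z, label, truncation_psi=0.7, noise_mode='const')  # NCHW in [-1, 1]
    arr = ((img[0].permute(1, 2, 0).clamp(-1, 1) + 1) * 127.5).to(torch.uint8)
    Image.fromarray(arr.cpu().numpy()).save(pkl.replace('.pkl', '.png'))
```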
Low-shot generation
CIFAR
Note
This repository is built upon DiffAug.
A short version of this paper appeared in the NeurIPS 2022 SyntheticData4ML workshop.
Citation
If you find this work useful for your research, please cite our paper:
@InProceedings{Baek_2022_CVPR,
author = {Baek, Kyungjune and Shim, Hyunjung},
title = {Commonality in Natural Images Rescues GANs: Pretraining GANs With Generic and Privacy-Free Synthetic Data},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {7854-7864}
}