<h1 align='center' style="text-align:center; font-weight:bold; font-size:2.0em;letter-spacing:2.0px;"> Few-shot Image Generation via <br> Adaptation-Aware Kernel Modulation</h1> <p align='center' style="text-align:center;font-size:1.25em;"> <a href="https://scholar.google.com/citations?user=kQA0x9UAAAAJ&hl=en" target="_blank" style="text-decoration: none;">Yunqing Zhao<sup>*</sup></a> , <a href="https://keshik6.github.io/" target="_blank" style="text-decoration: none;">Keshigeyan Chandrasegaran<sup>*</sup></a> , <a href="https://miladabd.github.io/" target="_blank" style="text-decoration: none;">Milad Abdollahzadeh<sup>*</sup></a> , <a href="https://sites.google.com/site/mancheung0407/" target="_blank" style="text-decoration: none;">Ngai‑Man Cheung<sup>†</sup></a><br/> </p> <p align='center' style="text-align:center;font-size:1.25em;"> Singapore University of Technology and Design<br/> </p> <p align='left'> <b> <em>NeurIPS 2022, </em> <em>Ernest N. Morial Convention Center, New Orleans, LA, USA.</em> <sup>*</sup> Equal Contribution </b> </p> <p align='left' style="text-align:left;font-size:1.3em;"> <b> [<a href="https://yunqing-me.github.io/AdAM/" target="_blank" style="text-decoration: none;">Project Page</a>] [<a href="https://neurips.cc/media/PosterPDFs/NeurIPS%202022/d0ac1ed0c5cb9ecbca3d2496ec1ad984.png" target="_blank" style="text-decoration: none;">Poster</a>] [<a href="https://drive.google.com/file/d/1hNSIlu0zhjGvqq-gG928jIICCCxuhFHz/view?usp=share_link" target="_blank" style="text-decoration: none;">Slides</a>] [<a href="https://proceedings.neurips.cc/paper_files/paper/2022/file/7b122d0a0dcb1a86ffa25ccba154652b-Paper-Conference.pdf" target="_blank" style="text-decoration: none;">Paper</a>] </b> </p>

TL;DR:
In this research, we propose Adaptation-Aware Kernel Modulation (AdAM) for few-shot image generation, which identifies the kernels in the source GAN that are important for adapting to the target domain.
The resulting model can adapt a pretrained GAN using very few target samples, for target domains of different proximity to the source.
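
For intuition, here is a minimal, self-contained sketch (not the repository implementation) of the core idea: a frozen source kernel is modulated only where a probing-derived importance mask marks it as relevant for the target. The class name, shapes, and the rank-1 modulation form below are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModulatedConv2d(nn.Module):
    """Illustrative only: modulate a frozen source kernel W as W * (1 + M)
    for output channels marked important; other channels keep source weights."""
    def __init__(self, weight: torch.Tensor, important: torch.Tensor):
        super().__init__()
        self.register_buffer("weight", weight)        # frozen source kernel (out, in, k, k)
        self.register_buffer("important", important)  # bool mask over output channels (probing result)
        # one learnable modulation scalar per (out, in) kernel slice (rank-1 example)
        self.mod = nn.Parameter(torch.zeros(weight.shape[0], weight.shape[1], 1, 1))

    def forward(self, x):
        mask = self.important.view(-1, 1, 1, 1).float()
        w = self.weight * (1.0 + mask * self.mod)      # modulate only the important kernels
        return F.conv2d(x, w, padding=self.weight.shape[-1] // 2)

# toy usage: 8 output channels, channels {0, 3, 5} marked important by probing
w = torch.randn(8, 4, 3, 3)
imp = torch.zeros(8, dtype=torch.bool)
imp[[0, 3, 5]] = True
layer = ModulatedConv2d(w, imp)
print(layer(torch.randn(1, 4, 16, 16)).shape)  # torch.Size([1, 8, 16, 16])
```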
Installation and Environment:
- Platform: Linux
- NVIDIA Tesla V100 or A100 GPUs
- PyTorch 1.7.0
- Python 3.6.9
- lmdb, tqdm
Alternatively, a suitable conda environment named `adam` can be created and activated with:

    git clone https://github.com/yunqing-me/AdAM.git
    cd AdAM
    conda env create -f environment.yml
    conda activate adam
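
A quick sanity check (not part of the repo) to confirm the environment sees the GPU and the expected PyTorch version:

```python
import torch

print(torch.__version__)                  # expected: 1.7.0
print(torch.cuda.is_available())          # should print True on a V100/A100 machine
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g., 'Tesla V100-SXM2-16GB'
```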
Analysis of Source ↦ Target distance
We analyze the Source ↦ Target domain relation in Sec. 3 of the paper (and the Supplementary). The steps for this analysis are as follows.

Step 1. Clone the StyleGAN2 codebase:

    git clone https://github.com/rosinality/stylegan2-pytorch.git

Step 2. Move `./visualization` into `./stylegan2-pytorch`.

Step 3. Refer to the visualization code in `./visualization`.
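
As a rough, standalone proxy for source ↦ target proximity (not the exact analysis used in the paper), one can compute FID between a folder of samples from the source GAN and a folder of target-domain images using the pytorch-fid package (@mseitzer's implementation, credited in the Acknowledgement). The folder paths below are placeholders:

```python
import torch
from pytorch_fid.fid_score import calculate_fid_given_paths

# placeholder paths: images sampled from the source GAN vs. real target-domain images
device = "cuda" if torch.cuda.is_available() else "cpu"
fid = calculate_fid_given_paths(
    ["./samples_from_source_gan", "./target_domain_images"],
    batch_size=50,
    device=device,
    dims=2048,
)
print(f"Source -> Target FID (lower means closer domains): {fid:.2f}")
```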
Pre-processing for training
Step 1. Prepare the few-shot training dataset in lmdb format.

For example, download the 10-shot target sets Babies (Link) and AFHQ-Cat (Link), and organize your directory as follows:

    10-shot-{babies/afhq_cat}
    └── images
        ├── image-1.png
        ├── image-2.png
        ├── ...
        └── image-10.png

Then, transform it to lmdb format:

    python prepare_data.py --input_path [your_data_path_of_{babies/afhq_cat}] --output_path ./_processed_train/[your_lmdb_data_path_of_{babies/afhq_cat}]
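
To sanity-check the converted data, the lmdb file can be opened directly. The key layout below (a `length` entry plus `{resolution}-{index:05d}` image keys) follows the rosinality/stylegan2-pytorch convention that `prepare_data.py` is based on; treat the key format and the 256 resolution as assumptions and adjust if your setup differs:

```python
import io
import lmdb
from PIL import Image

# placeholder path: the lmdb folder produced by prepare_data.py
env = lmdb.open("./_processed_train/babies", readonly=True, lock=False)
with env.begin(write=False) as txn:
    n = int(txn.get("length".encode("utf-8")).decode("utf-8"))
    print(f"{n} images in the lmdb dataset")
    # assumed key format: '{resolution}-{index:05d}', e.g. '256-00000'
    img_bytes = txn.get(f"{256}-{0:05d}".encode("utf-8"))
    img = Image.open(io.BytesIO(img_bytes))
    print(img.size, img.mode)
```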
Step 2. Prepare the entire target dataset for evaluation.

For example, download the entire Babies (Link) and AFHQ-Cat (Link) datasets, and organize your directory as follows:

    entire-{babies/afhq_cat}
    └── images
        ├── image-1.png
        ├── image-2.png
        ├── ...
        └── image-n.png

Then, transform it to lmdb format for evaluation:

    python prepare_data.py --input_path [your_data_path_of_entire_{babies/afhq_cat}] --output_path ./_processed_test/[your_lmdb_data_path_of_entire_{babies/afhq_cat}]
Step 3. Download the GAN model pretrained on FFHQ from here, and save it to `./_pretrained/style_gan_source_ffhq.pt`.
Step 4. Randomly generate Gaussian noise inputs (with the same dimension as the generator input) for Importance Probing and save them to `./_noise/`:

    python noise_generation.py
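
For reference, this step amounts to something like the sketch below (the actual `noise_generation.py` may differ in file names and sample count): StyleGAN2 takes 512-dimensional latent codes, so we draw a fixed batch of standard Gaussian vectors and save them so that probing always sees identical inputs.

```python
import os
import torch

os.makedirs("./_noise", exist_ok=True)
torch.manual_seed(0)             # fixed seed so the probing inputs are reproducible
z = torch.randn(500, 512)        # assumption: 500 probing inputs, 512-dim z-space of StyleGAN2
torch.save(z, "./_noise/probing_noise.pt")  # placeholder file name
print(z.shape)
```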
Experiments
Step 1. Importance Probing (IP) to identify kernels important for target adaptation:

    bash _bash_importance_probing.sh

This produces the estimated Fisher information of the modulated kernels, saved to `./_output_style_gan/args.exp/checkpoints/filter_fisher_g.pt` and `./_output_style_gan/args.exp/checkpoints/filter_fisher_d.pt` (where `args.exp` is your experiment name).
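
Conceptually, importance probing briefly adapts lightweight modulation parameters on the few-shot target data and accumulates squared gradients as a diagonal Fisher estimate per parameter. A simplified, self-contained sketch (not the repo code; `model`, `data_loader`, and `loss_fn` are placeholders) looks like:

```python
import torch

def estimate_fisher(model, data_loader, loss_fn, max_steps=100):
    """Diagonal Fisher estimate: average of squared gradients of the loss
    w.r.t. each trainable (modulation) parameter -- a simplified sketch."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters() if p.requires_grad}
    steps = 0
    for batch in data_loader:
        if steps >= max_steps:
            break
        model.zero_grad()
        loss_fn(model, batch).backward()
        for n, p in model.named_parameters():
            if p.requires_grad and p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
        steps += 1
    return {n: f / max(steps, 1) for n, f in fisher.items()}
```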
Step 2. Adaptation-Aware Kernel Modulation (AdAM) for Few-shot Image Generation:

    # you can tune hyperparameters here
    bash _bash_main_adaptation.sh

Training dynamics and evaluation results are logged to wandb.

Note that, ideally, Step 1 and Step 2 can be combined into a single run; for simplicity, we demonstrate them as two separate steps.
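
Once probing has produced the Fisher estimates, they can be turned into a per-kernel "modulate vs. keep frozen" decision, for example by keeping the top fraction of kernels. The file path matches Step 1, while the dict-of-tensors checkpoint format and the 0.5 keep ratio below are assumptions for illustration:

```python
import torch

# Fisher estimate saved by Importance Probing (see Step 1); replace args.exp with your experiment name
fisher_g = torch.load("./_output_style_gan/args.exp/checkpoints/filter_fisher_g.pt")

def important_kernel_mask(fisher: torch.Tensor, keep_ratio: float = 0.5) -> torch.Tensor:
    """Mark kernels whose Fisher information falls in the top `keep_ratio` fraction."""
    threshold = torch.quantile(fisher.flatten(), 1.0 - keep_ratio)
    return fisher >= threshold

# assumption: the checkpoint is a dict mapping layer names to per-kernel Fisher tensors
masks = {name: important_kernel_mask(f) for name, f in fisher_g.items()}
```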
Evaluation of Intra-LPIPS:
Using Babies and AFHQ-Cat as examples: download the images from here, move the unzipped folder into `./cluster_center`, and then refer to the `Evaluator` in `AdAM_main_adaptation.py`.
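
For reference, Intra-LPIPS (following Ojha et al.) assigns each generated image to its closest of the 10 cluster-center images by LPIPS distance and then averages pairwise LPIPS within each cluster. The repo's `Evaluator` is the reference implementation; the sketch below using the lpips package is only illustrative (tensors scaled to [-1, 1], shape (N, 3, H, W)):

```python
import itertools
import torch
import lpips

lpips_fn = lpips.LPIPS(net="vgg")  # perceptual distance; inputs scaled to [-1, 1]

def intra_lpips(generated: torch.Tensor, centers: torch.Tensor) -> torch.Tensor:
    """generated: (N, 3, H, W) samples; centers: (10, 3, H, W) cluster-center images."""
    with torch.no_grad():
        # assign each generated image to its nearest cluster center
        clusters = [[] for _ in range(len(centers))]
        for g in generated:
            d = torch.stack([lpips_fn(g[None], c[None]).squeeze() for c in centers])
            clusters[int(d.argmin())].append(g)
        # average pairwise LPIPS within each cluster that has at least two members
        scores = []
        for members in clusters:
            if len(members) < 2:
                continue
            pair_d = [lpips_fn(a[None], b[None]).squeeze()
                      for a, b in itertools.combinations(members, 2)]
            scores.append(torch.stack(pair_d).mean())
        return torch.stack(scores).mean()
```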
Data Repository
The estimated Fisher information (i.e., the output of Importance Probing) and the adapted model weights (i.e., the output of the main adaptation step, corresponding to Figure 4 in the main paper) can be found here.
Train your own GAN!
We provide all 10-shot target images and models used in our main paper and the Supplementary. You can also adapt the source GAN to target images of your own choosing.
Source GAN:
- FFHQ
- LSUN-Church
- LSUN-Cars
- ...
Target Samples: Link
- Babies
- Sunglasses
- MetFaces
- AFHQ-Cat
- AFHQ-Dog
- AFHQ-Wild
- Sketches
- Amedeo Modigliani's Paintings
- Raphael's Paintings
- Otto Dix's Paintings
- Haunted houses
- Van Gogh houses
- Wrecked cars
- ...
Follow the Experiments section above to produce your own customized results.
BibTeX
If you find this project useful in your research, please consider citing our paper:

    @inproceedings{zhao2022fewshot,
      title={Few-shot Image Generation via Adaptation-Aware Kernel Modulation},
      author={Yunqing Zhao and Keshigeyan Chandrasegaran and Milad Abdollahzadeh and Ngai-man Cheung},
      booktitle={Advances in Neural Information Processing Systems},
      editor={Alice H. Oh and Alekh Agarwal and Danielle Belgrave and Kyunghyun Cho},
      year={2022},
      url={https://openreview.net/forum?id=Z5SE9PiAO4t}
    }
We also highlight our related research that aims to identify and Remove InCompatible Knowledge (RICK, CVPR 2023) for few-shot image generation:

    @inproceedings{zhao2023exploring,
      title={Exploring incompatible knowledge transfer in few-shot image generation},
      author={Zhao, Yunqing and Du, Chao and Abdollahzadeh, Milad and Pang, Tianyu and Lin, Min and Yan, Shuicheng and Cheung, Ngai-Man},
      booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
      pages={7380--7391},
      year={2023}
    }
Acknowledgement:
We appreciate the wonderful base implementation of StyleGAN2 from @rosinality. We thank @mseitzer, @Ojha, and @richzhang for their implementations of the FID score and Intra-LPIPS.
We also thank @Miaoyun for the useful training and evaluation tools used in this work.