# AttnGAN

PyTorch implementation for reproducing the AttnGAN results in the paper AttnGAN: Fine-Grained Text to Image Generation with Attentional Generative Adversarial Networks.
## Dependencies

- python 2.7
- PyTorch

In addition, please add the project folder to `PYTHONPATH` and `pip install` the following packages:

- `python-dateutil`
- `easydict`
- `pandas`
- `torchfile`
- `nltk`
- `scikit-image`
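The setup above can be done in one pass from the repository root (a sketch; the working directory and shell session are assumptions):

```shell
# Run from the repository root (hypothetical location).
# Make the project importable for this shell session:
export PYTHONPATH="$PWD:$PYTHONPATH"

# Install the required packages listed above:
pip install python-dateutil easydict pandas torchfile nltk scikit-image
```

Note that the `export` only lasts for the current shell; add it to your shell profile to make it permanent.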
## Data

- Download metadata (text and filenames):
  - Download the preprocessed metadata for coco and save it to `data/`.
  - Extract the caption files in `train2014-text.zip` and `val2014-text.zip` to `data/coco/text/`.
  - [Optional] If you want to use the pre-trained models, please download the dictionary `captions.pickle`; otherwise it will be generated by `pretrain_DAMSM.py`.
- Download images:
  - Download the coco dataset and extract both `train2014` and `val2014` images to `data/coco/images/`.
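Assuming the archives have been downloaded into `data/`, the layout described above can be created like this (a sketch; the image-zip filenames are assumptions based on the standard coco release):

```shell
# Create the directory layout expected by the training scripts:
mkdir -p data/coco/text data/coco/images

# Extract the caption archives named above into data/coco/text/
# (paths assume the zips were saved under data/):
unzip -q data/train2014-text.zip -d data/coco/text/
unzip -q data/val2014-text.zip -d data/coco/text/

# Extract the coco image sets into data/coco/images/
# (zip names are the standard coco downloads, assumed here):
unzip -q data/train2014.zip -d data/coco/images/
unzip -q data/val2014.zip -d data/coco/images/
```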
## Training

- Pre-train DAMSM models:
  - For the coco dataset: `python pretrain_DAMSM.py --cfg cfg/DAMSM/coco.yml --gpu 0`
- Train AttnGAN models:
  - For the coco dataset: `python main.py --cfg cfg/coco_attn2.yml --gpu 0`
- `*.yml` files are example configuration files for training/evaluating our models.
## Pretrained Model

- DAMSM for coco: download and save it to `DAMSMencoders/`.
- AttnGAN for coco: download and save it to `models/`.
## Sampling

- Run `python main.py --cfg cfg/eval_coco.yml --gpu 1` to generate examples from captions in the files listed in `./data/coco/example_filenames.txt`. Results are saved to `DAMSMencoders/`.
- Change the `eval_*.yml` files to generate images from other pre-trained models.
- Input your own sentences in `./data/coco/example_captions.txt` if you want to generate images from customized sentences.
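For customized sentences, the workflow above reduces to two commands (a sketch; the caption text is just an example):

```shell
# Write a custom caption (one sentence per line) for the generator to read:
echo "a herd of zebras grazing in a green field" > ./data/coco/example_captions.txt

# Generate images for the captions written above:
python main.py --cfg cfg/eval_coco.yml --gpu 1
```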
## Validation

- To generate images for all captions in the validation dataset, change `B_VALIDATION` to `True` in the `eval_*.yml` file, and then run `python main.py --cfg cfg/eval_coco.yml --gpu 1`.
- We compute the inception score for models trained on coco using `improved-gan/inception_score`.
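For reference, the switch mentioned above lives in the evaluation config; only `B_VALIDATION` is documented here, and the surrounding key is an illustrative assumption:

```yaml
# Hypothetical excerpt of an eval_*.yml file -- only B_VALIDATION is
# documented above; the GPU_ID key is illustrative.
B_VALIDATION: True   # generate images for every caption in the validation set
GPU_ID: 1            # mirrors the --gpu flag shown in the command above
```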
## Examples generated by AttnGAN [Blog]

*(example images: bird example | coco example)*
## Citing AttnGAN

If you find AttnGAN useful in your research, please consider citing:

```
@inproceedings{Tao18attngan,
  author    = {Tao Xu and Pengchuan Zhang and Qiuyuan Huang and Han Zhang and Zhe Gan and Xiaolei Huang and Xiaodong He},
  title     = {AttnGAN: Fine-Grained Text to Image Generation with Attentional Generative Adversarial Networks},
  year      = {2018},
  booktitle = {{CVPR}}
}
```