
Recurrent Affine Transformation for Text-to-Image Synthesis

Official PyTorch implementation for our paper "Recurrent Affine Transformation for Text-to-Image Synthesis".


Examples



Requirements

Note that nf=32 produces an IS of around 5.0 on CUB. To reproduce the final results, please use a GPU with more than 32 GB of memory.

Installation

Clone this repo.

```shell
git clone https://github.com/senmaoy/RAT-GAN.git
cd RAT-GAN/code/
```

Datasets Preparation

  1. Download the preprocessed metadata for birds and coco and save them to data/
  2. Download the birds image data and extract it to data/birds/. Raw text data of the CUB dataset is available here
  3. Download the coco dataset and extract the images to data/coco/
  4. Download the flower dataset and extract the images to data/flower/. Raw text data of the flower dataset is available here

Note that the flower dataset differs slightly from CUB and COCO: it is handled by a standalone dataset-processing script.
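Putting the steps above together, the directories the README expects under data/ can be created up front (a sketch; the metadata and image archives are then extracted into them):

```shell
# Create the dataset directories referenced in the preparation steps above.
mkdir -p data/birds data/coco data/flower
ls data
```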

It's easy to train on your own dataset (the processing is similar to that of the flower dataset):

  1. Prepare a captions.pickle file containing all the image paths. Note that you need to prepare captions.pickle yourself.
  2. Save captions.pickle under data_dir.
  3. Put all the captions for an image in a standalone txt file (one caption per line). This txt file is later read by dataset_flower.py at line 149: cap_path = '%s/%s.txt' % ('/home/yesenmao/dataset/flower/jpg_text/', filenames['img'][i])
  4. Run main.py as usual. dataset_flower.py will automatically process your own dataset.
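As a hedged sketch of steps 1–3: the exact structure of captions.pickle is defined by the repo's dataset code, but assuming it is simply a pickled list of image paths, preparing it together with the per-image caption files could look like the following (the paths, filenames, and captions here are illustrative, not from the repo; check dataset_flower.py for the format it actually expects):

```python
import os
import pickle

# Illustrative image paths; in practice, collect them from your dataset folder.
image_paths = ["jpg/image_0001.jpg", "jpg/image_0002.jpg"]

data_dir = "data/flower"
text_dir = os.path.join(data_dir, "jpg_text")
os.makedirs(text_dir, exist_ok=True)

# Steps 1-2: pickle the list of image paths and save it under data_dir.
# NOTE: treating captions.pickle as a plain list of paths is an assumption;
# verify against the loading code in dataset_flower.py.
with open(os.path.join(data_dir, "captions.pickle"), "wb") as f:
    pickle.dump(image_paths, f)

# Step 3: one .txt file per image, one caption per line.
captions = {
    "image_0001": ["a purple flower with large petals",
                   "a close-up of a purple bloom"],
    "image_0002": ["a yellow flower in a green field"],
}
for name, caps in captions.items():
    with open(os.path.join(text_dir, name + ".txt"), "w") as f:
        f.write("\n".join(caps))
```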

Pre-trained text encoder

  1. Download the pre-trained text encoder for CUB and save it to ../bird/
  2. Download the pre-trained text encoder for coco and save it to ../coco/
  3. Download the pre-trained text encoder for flower and save it to ../flower/

Training

Train RAT-GAN models:
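The training command is not spelled out in this README. As a hedged sketch only, assuming main.py (mentioned in the dataset steps above) is the entry point and takes a YAML config via a --cfg flag, as in related text-to-image codebases; the flag name and cfg/bird.yml path are assumptions, not confirmed here:

```shell
cd RAT-GAN/code/
# --cfg and the config filename are assumptions; check main.py for the actual arguments.
python main.py --cfg cfg/bird.yml
```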

Evaluating

Download Pretrained Model

Evaluate RAT-GAN models:


Citing RAT-GAN

If you find RAT-GAN useful in your research, please consider citing our paper:

```
@article{ye2022recurrent,
  title={Recurrent Affine Transformation for Text-to-image Synthesis},
  author={Ye, Senmao and Liu, Fei and Tan, Minkui},
  journal={arXiv preprint arXiv:2204.10482},
  year={2022}
}
```

If you are interested, join us in our WeChat group, where a dozen t2i partners are waiting for you! If the QR code has expired, you can add this WeChat account: Unsupervised2020

(WeChat group QR code)

The code is released for academic research use only. Please contact me at senmaoy@gmail.com.
