Introduction

This repository contains the source code of our IEEE TCSVT 2019 paper "Bridge-GAN: Interpretable Representation Learning for Text-to-image Synthesis". Please cite the following paper if you use our code.

Mingkuan Yuan and Yuxin Peng, "Bridge-GAN: Interpretable Representation Learning for Text-to-image Synthesis", IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), DOI:10.1109/TCSVT.2019.2953753, Nov. 2019. [pdf]

Training Environment

CUDA 9.0

Python 3.6.8

TensorFlow 1.10.0
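
A minimal sketch for reproducing this environment with conda (the environment name is our own illustrative choice; CUDA 9.0 and a matching cuDNN must be installed separately):

    conda create -n bridge-gan python=3.6.8   # "bridge-gan" is an illustrative name
    conda activate bridge-gan
    pip install tensorflow-gpu==1.10.0        # GPU build matching CUDA 9.0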

Preparation

Download the preprocessed char-CNN-RNN text embeddings and filename lists for birds, and save them in data/cub/

Download the bird image data and extract it to data/cub/images/

Download the Inception Score model to evaluation/models/; it is used to evaluate the trained model

Run the following command:

- sh data_preprocess.sh
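
For reference, the commands below sketch where everything should land, assuming the usual StackGAN-style archive names (birds.tar.gz for the embeddings, CUB_200_2011.tgz for the images); the actual names of your downloads may differ:

    # Hypothetical archive names; adjust to what you actually downloaded.
    mkdir -p data/cub
    tar -xzf birds.tar.gz                      # char-CNN-RNN embeddings + filename lists
    mv birds/* data/cub/                       # contents must end up in data/cub/
    tar -xzf CUB_200_2011.tgz
    mv CUB_200_2011/images data/cub/images     # bird image data
    sh data_preprocess.sh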

Training

- Run 'sh train_all.sh' to train the model

Trained Model

Download our trained model to code/results/00000-bgan-cub-cond-2gpu/ for evaluation
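
For example (the snapshot filename below is hypothetical; since the code builds on StyleGAN, snapshots are typically network-*.pkl files):

    mkdir -p code/results/00000-bgan-cub-cond-2gpu
    mv network-final.pkl code/results/00000-bgan-cub-cond-2gpu/   # hypothetical filename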

Inception Score Environment

CUDA 8.0

Python 2.7.12

TensorFlow 1.2.1
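
Evaluation needs an older stack than training, so it is easiest to keep it in a second environment. A sketch along the same lines as above (again, the name is illustrative):

    conda create -n inception-score python=2.7.12
    conda activate inception-score
    pip install tensorflow-gpu==1.2.1         # GPU build matching CUDA 8.0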

Evaluation

- Run 'sh test_all.sh' to compute the final Inception Score
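
Note that the script must run under the Python 2.7 / TensorFlow 1.2.1 stack described above, for example (assuming the whole evaluation pipeline runs in that environment):

    conda activate inception-score            # illustrative name from the sketch above
    sh test_all.sh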

Our Related Work

If you are interested in text-to-image synthesis, you may also check our other recent papers on the topic:

Mingkuan Yuan and Yuxin Peng, "CKD: Cross-task Knowledge Distillation for Text-to-image Synthesis", IEEE Transactions on Multimedia (TMM), DOI:10.1109/TMM.2019.2951463, Nov. 2019. [pdf]

Mingkuan Yuan and Yuxin Peng, "Text-to-image Synthesis via Symmetrical Distillation Networks", 26th ACM Multimedia Conference (ACM MM), pp. 1407-1415, Seoul, Korea, Oct. 22-26, 2018. [pdf]

You are welcome to visit our Laboratory Homepage for more information about our papers, source code, and datasets.

Acknowledgement

Our project borrows some source files from StyleGAN. We thank the authors.