Compositional GAN in PyTorch

This is the PyTorch implementation of Compositional GAN: Learning Image-Conditional Binary Composition. The code was written by Samaneh Azadi. The paper is available on arXiv and in the International Journal of Computer Vision (2020).

Prerequisites

Preparation

Installation

# Build torchvision from source (the repository's original instructions)
git clone https://github.com/pytorch/vision
cd vision
python setup.py install
# Install visdom for training visualization
pip install visdom
# Clone this repository
git clone https://github.com/azadis/CompositionalGAN
cd CompositionalGAN

Datasets

Individual chairs and tables are taken from the ShapeNet dataset, faces from the CelebA dataset, and street scenes from the Cityscapes dataset.

Training

Viewpoint Transformation module:

If your model includes viewpoint transformation, as in the chair_table experiment, train the Appearance Flow Network (AFN) with:

bash scripts/chair_table/train_AFN_Compose.sh

or download our trained AFN model:

bash scripts/chair_table/download_ckpt.sh

Paired Data

bash scripts/${obj1_obj2}/train_objCompose_paired.sh

Unpaired Data

bash scripts/${obj1_obj2}/train_objCompose_unpaired.sh
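In these commands, ${obj1_obj2} names the object-pair experiment directory under scripts/. A minimal sketch of the substitution, using chair_table (the pair from the AFN example above) as the value:

```shell
# ${obj1_obj2} selects the experiment; chair_table is one example pair.
obj1_obj2=chair_table

# Expands to the paired-training script for that pair.
train_script="scripts/${obj1_obj2}/train_objCompose_paired.sh"
echo "$train_script"   # prints scripts/chair_table/train_objCompose_paired.sh
```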

Testing

bash scripts/${obj1_obj2}/test_objCompose_paired.sh

or

bash scripts/${obj1_obj2}/test_objCompose_unpaired.sh

Visualization

For results of the paired model:

cd results/${obj1_obj2}_test_paired_compGAN/test_${epoch}/
python -m http.server 8884

For results after fine-tuning:

cd results/finetune/${obj1_obj2}_test_paired_compGAN/test_${epoch}/
python -m http.server 8884

Then open http://localhost:8884 in your browser.
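The result pages live in per-epoch directories. A sketch of building the path to serve, assuming chair_table and epoch 200 as example values:

```shell
# Example values; substitute your own pair name and test epoch.
obj1_obj2=chair_table
epoch=200

# Directory containing the generated result pages for that epoch.
results_dir="results/${obj1_obj2}_test_paired_compGAN/test_${epoch}"
echo "$results_dir"   # prints results/chair_table_test_paired_compGAN/test_200
```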

Citation

If you use this code or our compositional dataset, please cite our paper:

@article{azadi2018compositional,
  title={Compositional {GAN}: Learning image-conditional binary composition},
  author={Azadi, Samaneh and Pathak, Deepak and Ebrahimi, Sayna and Darrell, Trevor},
  journal={arXiv preprint arXiv:1807.07560},
  year={2018}
}