SingleGAN

PyTorch implementation of our paper: "SingleGAN: Image-to-Image Translation by a Single-Generator Network using Multiple Generative Adversarial Learning".

By leveraging multiple adversarial learning, our model can perform multi-domain and multi-modal image translation with a single generator.

<p align="center"> <img src='images/base_model.jpg' align="center" width='90%'> </p> <p align="center"> <img src='images/extended_models.jpg' align="center" width='90%'> </p>

Dependencies

You can install all the dependencies with:

pip install -r requirements.txt

Getting Started

Datasets

Training

Testing

In recent experiments, we found that spectral normalization (SN) helps stabilize the training stage, so we have added SN to this implementation. You may need to update PyTorch to 0.4.1 to support SN, or use an older version of the code without SN.
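Spectral normalization constrains a layer by dividing its weight matrix by its largest singular value, which is typically estimated with power iteration. The sketch below (a hypothetical helper, not code from this repository) illustrates that estimate in NumPy:

```python
import numpy as np

def spectral_norm_estimate(W, n_iters=50):
    """Estimate the largest singular value of W by power iteration,
    the quantity spectral normalization divides the weights by."""
    u = np.random.RandomState(0).randn(W.shape[0])
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    # Rayleigh-quotient estimate of sigma_max(W)
    return u @ W @ v

W = np.array([[2.0, 0.0], [0.0, 1.0]])
print(spectral_norm_estimate(W))  # approx. 2.0
```

In PyTorch 0.4.1 and later this is handled by `torch.nn.utils.spectral_norm`, which wraps a module and performs one power-iteration step per forward pass.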

Results

Unsupervised cross-domain translation:

<p align="center"><img src='images/base.jpg' align="center" width='100%'></p>

Unsupervised one-to-many translation:

<p align="center"><img src='images/one2many.jpg' align="center" width='90%'></p>

Unsupervised many-to-many translation:

<p align="center"><img src='images/many2many.jpg' align="center" width='60%'></p>

Unsupervised multimodal translation:

Cat ↔ Dog:

<p align="center"> <img src='images/cat.jpg' width='18%' /><img src='images/cat2dog.gif' width='18%' /> <img src='images/dog.jpg' width='18%'/><img src='images/dog2cat.gif' width='18%'/> </p>

Label ↔ Facade:

<p align="center"> <img src='images/label.jpg' width='18%' /><img src='images/label2facade.gif' width='18%' /> <img src='images/facade.jpg' width='18%'/><img src='images/facade2label.gif' width='18%'/> </p>

Edge ↔ Shoes:

<p align="center"> <img src='images/edge.jpg' width='18%' /><img src='images/edge2shoe.gif' width='18%' /> <img src='images/shoe.jpg' width='18%'/><img src='images/shoe2edge.gif' width='18%'/> </p>

Please note that this repository contains only the unsupervised version of SingleGAN. You can implement the supervised version by overloading the data loader and replacing the cycle-consistency loss with a reconstruction loss. See our paper for details.
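Concretely, with paired data the cycle term (comparing a round-trip translation to the input) can be swapped for a direct comparison between the generator output and its paired ground truth. A minimal sketch of such a reconstruction term, using a mean L1 distance on hypothetical arrays (the real loss would operate on generator outputs):

```python
import numpy as np

def l1_reconstruction_loss(fake, target):
    """Mean L1 distance between a generated image and its paired
    ground truth; replaces the cycle-consistency term when paired
    supervision is available."""
    return np.mean(np.abs(fake - target))

# Toy example: a 3x4x4 "image" compared against its paired target.
fake = np.zeros((3, 4, 4))
target = np.full((3, 4, 4), 0.5)
print(l1_reconstruction_loss(fake, target))  # 0.5
```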

BibTeX

If this work is useful for your research, please consider citing:

@inproceedings{yu2018singlegan,
	title={SingleGAN: Image-to-Image Translation by a Single-Generator Network using Multiple Generative Adversarial Learning},
	author={Yu, Xiaoming and Cai, Xing and Ying, Zhenqiang and Li, Thomas and Li, Ge},
	booktitle={Asian Conference on Computer Vision},
	year={2018}
}

Acknowledgement

The code used in this research is inspired by BicycleGAN.

Contact

Feel free to contact me if you have any questions (Xiaoming-Yu@pku.edu.cn).