Unsupervised Attention-guided Image-to-Image Translation

This repository contains the TensorFlow code for our NeurIPS 2018 paper “Unsupervised Attention-guided Image-to-Image Translation”. The code is based on the TensorFlow implementation of CycleGAN provided by Harry Yang. You may need to train several times, as the quality of the results is sensitive to the initialization.

By leveraging attention, our architecture (shown in the figure below) maps only the relevant areas of the image, which further enhances the quality of image-to-image translation.

Our model architecture is depicted below; please refer to the paper for more details:

<img src='imgs/AGGANDiagram.jpg' width="900px"/>
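To make the composition concrete, here is a minimal NumPy sketch of the attention-guided mapping: the generator output is kept only where the attention map fires, and the input passes through unchanged elsewhere. The names `generator` and `attention_net` are hypothetical placeholders, not functions from this repository:

```python
import numpy as np

def attention_guided_translate(x, generator, attention_net):
    """Composite the generator output with the input via an attention mask.

    x             : input image, float array of shape (H, W, 3) in [0, 1]
    generator     : maps an image to the target domain (hypothetical)
    attention_net : predicts a per-pixel mask in [0, 1], shape (H, W, 1)
                    (hypothetical)
    """
    fx = generator(x)      # translated image F(x)
    a = attention_net(x)   # attention map a(x)
    # Foreground is translated; background is copied from the input.
    return a * fx + (1.0 - a) * x

# Toy usage with stand-in callables:
x = np.random.rand(64, 64, 3).astype(np.float32)
out = attention_guided_translate(
    x,
    generator=lambda img: 1.0 - img,  # toy "translation"
    attention_net=lambda img: np.full(img.shape[:2] + (1,), 0.5, np.float32),
)
```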

Mapping results

Our learned attention maps

The figure below displays automatically learned attention maps on various translation datasets:
<img src='imgs/attentionMaps.jpg' width="900px"/>

Horse-to-Zebra image translation results:

Horse-to-Zebra:

The top row in the figure below shows input images; the bottom row shows the mappings produced by our algorithm.

<img src='imgs/HtZ.jpg' width="900px"/>

Zebra-to-Horse:

The top row in the figure below shows input images; the bottom row shows the mappings produced by our algorithm.

<img src='imgs/ZtH.jpg' width="900px"/>

Apple-to-Orange image translation results:

Apple-to-Orange:

The top row in the figure below shows input images; the bottom row shows the mappings produced by our algorithm.

<img src='imgs/AtO.jpg' width="900px"/>

Orange-to-Apple:

The top row in the figure below shows input images; the bottom row shows the mappings produced by our algorithm.

<img src='imgs/OtA.jpg' width="900px"/>

Getting Started with the code

Prepare dataset
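A minimal sketch of the data preparation, assuming this repository follows the upstream CycleGAN-TensorFlow pipeline (download a CycleGAN dataset, then build a CSV file for the data loader). The dataset host URL is the official CycleGAN one; the `create_cyclegan_dataset.py` script name and its flags are assumptions, not confirmed by this section:

```bash
# Download and unpack a CycleGAN dataset (horse2zebra used as an example).
wget https://people.eecs.berkeley.edu/~taesung_park/CycleGAN/datasets/horse2zebra.zip
unzip horse2zebra.zip -d ./input

# Build the CSV file consumed by the data loader. The script name and
# flags below are assumptions based on the upstream CycleGAN-TensorFlow code.
python create_cyclegan_dataset.py \
  --image_path_a=./input/horse2zebra/trainA \
  --image_path_b=./input/horse2zebra/trainB \
  --dataset_name="horse2zebra_train" \
  --do_shuffle=0
```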

Training
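The exact training invocation is not given here; below is a sketch by analogy with the restore command further down, assuming `--to_train=1` means "train from scratch" (the restore command uses `--to_train=2`):

```bash
# Train from scratch. --to_train=1 is an assumption inferred from the
# resume command below, which uses --to_train=2.
python main.py \
  --to_train=1 \
  --log_dir=./output/AGGAN/exp_01 \
  --config_filename=./configs/exp_01.json
```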

Restoring from the previous checkpoint

```bash
python main.py --to_train=2 --log_dir=./output/AGGAN/exp_01 --config_filename=./configs/exp_01.json --checkpoint_dir=./output/AGGAN/exp_01/#timestamp#
```

Here `#timestamp#` stands for the timestamped folder that the previous training run created under the log directory.

Testing
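A sketch of the test invocation, assuming `--to_train=0` switches to inference and that the trained weights are read from `--checkpoint_dir`, mirroring the restore command above; the test config filename is a hypothetical example:

```bash
# Run inference. --to_train=0 and the test config name are assumptions,
# not flags confirmed by this document.
python main.py \
  --to_train=0 \
  --log_dir=./output/AGGAN/exp_01 \
  --config_filename=./configs/exp_01_test.json \
  --checkpoint_dir=./output/AGGAN/exp_01/#timestamp#
```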