
Learning Efficient GANs for Image Translation via Differentiable Masks and co-Attention Distillation (Link).

<div align=center><img src="img/framework.png" height = "50%" width = "60%"/></div>

Framework of our method. We first build a pre-trained GAN model, upon which a differentiable mask is imposed to scale the convolutional outputs of the generator and derive a light-weight one. Then, the co-attention of the pre-trained GAN and the outputs of the last-layer convolutions of the discriminator are distilled to stabilize the training of the light-weight model.
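To make the caption concrete, below is a minimal PyTorch sketch of the two ingredients: a differentiable per-channel mask that scales convolutional outputs (so channels whose mask decays toward zero can be pruned), and an attention-style distillation loss that matches the student's feature maps to the frozen teacher's. All names here (`MaskedConv`, `attention_map`, `distill_loss`) are hypothetical, and the paper's exact mask parameterization, sparsity regularizer, and co-attention definition may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedConv(nn.Module):
    """Convolution whose output channels are scaled by a learnable,
    differentiable mask. Channels whose mask entries decay toward zero
    can be pruned after training. (Illustrative only; not the authors'
    exact formulation.)"""
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)
        # One real-valued mask entry per output channel, initialized at 1.
        self.mask = nn.Parameter(torch.ones(out_ch))

    def forward(self, x):
        out = self.conv(x)
        # Broadcast the mask over the spatial dimensions: (1, C, 1, 1).
        return out * self.mask.view(1, -1, 1, 1)

    def sparsity_loss(self):
        # L1 pressure pushes mask entries toward zero, marking prunable channels.
        return self.mask.abs().sum()

def attention_map(feat):
    """Collapse a feature map (N, C, H, W) into a normalized spatial
    attention map (N, H*W) by summing squared activations over channels."""
    attn = feat.pow(2).sum(dim=1).flatten(1)
    return F.normalize(attn, dim=1)

def distill_loss(teacher_feat, student_feat):
    # Match the student's attention map to the frozen teacher's.
    return F.mse_loss(attention_map(student_feat),
                      attention_map(teacher_feat.detach()))
```

In this sketch, adding `sparsity_loss()` to the training objective drives unneeded channels toward zero; after training, channels with near-zero mask entries can be removed to obtain the light-weight generator, while `distill_loss` on teacher/student features stands in for the co-attention distillation that stabilizes its training.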

Tips

For any problem, feel free to contact the first author (shaojieli@stu.xmu.edu.cn).

Getting Started

The code has been tested with PyTorch 1.5.1 and CUDA 10.2 on Ubuntu 18.04.

Please run the following command to install dependencies:

```shell
pip install -r requirements.txt
```

CycleGAN

Pix2Pix

Compressed Models

We provide the compressed models used in our experiments.

| Model | Task | MACs<br/>(Compression Rate) | Parameters<br/>(Compression Rate) | FID/mIoU | Download |
| :---: | :---: | :---: | :---: | :---: | :---: |
| CycleGAN | horse2zebra | 3.97G (14.3×) | 0.42M (26.9×) | FID: 62.41 | Link |
| CycleGAN* | horse2zebra | 2.41G (23.6×) | 0.28M (40.4×) | FID: 62.96 | Link |
| CycleGAN | zebra2horse | 3.50G (16.2×) | 0.30M (37.7×) | FID: 139.3 | Link |
| CycleGAN | summer2winter | 3.18G (17.9×) | 0.24M (47.1×) | FID: 78.24 | Link |
| CycleGAN | winter2summer | 4.29G (13.2×) | 0.45M (25.1×) | FID: 70.97 | Link |
| Pix2Pix | edges2shoes | 2.99G (6.22×) | 2.13M (25.5×) | FID: 46.95 | Link |
| Pix2Pix* | edges2shoes | 4.30G (4.32×) | 0.54M (100.7×) | FID: 24.08 | Link |
| Pix2Pix | cityscapes | 3.96G (4.70×) | 1.73M (31.4×) | mIoU: 40.53 | Link |
| Pix2Pix* | cityscapes | 4.39G (4.24×) | 0.55M (98.9×) | mIoU: 41.47 | Link |

\* indicates that a generator with separable convolutions is adopted.
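For reference, a depthwise-separable convolution replaces a standard convolution with a per-channel convolution followed by a 1×1 convolution. A generic PyTorch sketch (not the repository's actual generator code) looks like this:

```python
import torch.nn as nn

def separable_conv(in_ch, out_ch, kernel_size=3, padding=1):
    """Generic depthwise-separable convolution: a per-channel (depthwise)
    convolution followed by a 1x1 (pointwise) convolution. Compared with a
    standard convolution, this cuts parameters from in_ch*out_ch*k*k to
    roughly in_ch*k*k + in_ch*out_ch."""
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, kernel_size, padding=padding, groups=in_ch),  # depthwise
        nn.Conv2d(in_ch, out_ch, kernel_size=1),  # pointwise
    )
```

This trade-off explains why the * rows reach much higher parameter compression rates at comparable quality.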

You can use the following command to test our compressed models:

```shell
python test.py \
  --dataroot ./database/horse2zebra \
  --model cyclegan \
  --load_path ./result/horse2zebra.pth
```

Acknowledgements

Our code builds on pytorch-CycleGAN-and-pix2pix and GAN Compression.