GANILLA

We provide a PyTorch implementation for:

GANILLA: Generative Adversarial Networks for Image to Illustration Translation.

Paper: arXiv

Updates

Dataset Stats:

[Figure: illustration dataset statistics]

Sample Images:

[Figure: sample images from the illustration dataset]

GANILLA:

GANILLA results on the illustration dataset:

[Figure: GANILLA results]

Comparison with other methods:

[Figure: comparison with other methods]

Style transfer using Miyazaki's anime images:

[Figure: style transfer results using Miyazaki images]

Ablation Experiments:

[Figure: ablation experiment results]

Prerequisites

Getting Started

Downloading Datasets

Please refer to datasets.md for details.
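If you want to train on your own data instead, CycleGAN-style loaders expect unaligned image folders. A minimal layout sketch (the folder names follow the upstream convention and the testB path used below; your_dataset is a placeholder):

datasets/your_dataset/
  trainA/   # training images from the source domain (e.g., natural photos)
  trainB/   # training images from the target domain (e.g., illustrations)
  testA/    # test images from the source domain
  testB/    # test images from the target domain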

Installation

git clone https://github.com/giddyyupp/ganilla.git
cd ganilla
pip install -r requirements.txt
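To check that the installed PyTorch build works (a generic sanity check, not a script shipped with this repo):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"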

GANILLA train/test

# Download a CycleGAN-style dataset (e.g., maps):
bash ./datasets/download_cyclegan_dataset.sh maps
# Train a model (see ./scripts/train_ganilla.sh):
python train.py --dataroot ./datasets/maps --name maps_cyclegan --model cycle_gan --netG resnet_fpn
# Test the model (see ./scripts/test_cyclegan.sh):
python test.py --dataroot ./datasets/maps --name maps_cyclegan --model cycle_gan --netG resnet_fpn

The test results will be saved to an HTML file at ./results/maps_cyclegan/latest_test/index.html.

You can find more example scripts in the scripts directory.
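Since the code is derived from the CycleGAN codebase (see Acknowledgments), the upstream training conveniences should carry over; for example, assuming the upstream options are unchanged:

# Resume an interrupted training run (upstream --continue_train flag; assumed to be available here):
python train.py --dataroot ./datasets/maps --name maps_cyclegan --model cycle_gan --netG resnet_fpn --continue_train
# Track losses and intermediate images in the browser (upstream uses visdom, served on port 8097 by default):
python -m visdom.server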

Apply a pre-trained model (GANILLA)

Put a pretrained model under ./checkpoints/{name}_pretrained/100_net_G.pth.
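For example, for a checkpoint downloaded to your machine (miyazaki_pretrained is a hypothetical name; substitute your own {name}):

mkdir -p ./checkpoints/miyazaki_pretrained
mv /path/to/downloaded/100_net_G.pth ./checkpoints/miyazaki_pretrained/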

# Download the images to be translated (e.g., the monet2photo test photos):
bash ./datasets/download_cyclegan_dataset.sh monet2photo
python test.py --dataroot datasets/monet2photo/testB --name {name}_pretrained --model test

The option --model test is used for generating GANILLA results in one direction only; python test.py --model cycle_gan loads and generates results in both directions, which is sometimes unnecessary. The results are saved to ./results/ by default; use --results_dir {directory_path_to_save_result} to specify a different results directory.
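For instance, to collect the outputs somewhere other than ./results/ (the target directory here is just an example):

python test.py --dataroot datasets/monet2photo/testB --name {name}_pretrained --model test --results_dir ./outputs/monet2photo_test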

# See ./scripts/test_single.sh:
python test.py --dataroot ./datasets/monet2photo/testB/ --name {your_trained_model_name} --model test

You might want to specify --netG to match the generator architecture of the trained model.
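For example, for a model trained with the commands above (which use resnet_fpn):

python test.py --dataroot ./datasets/monet2photo/testB/ --name {your_trained_model_name} --model test --netG resnet_fpn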

Style & Content CNN

We share the style & content CNNs in this repo, which contains the train/test procedures as well as pretrained weights for both CNNs.

Training/Test Tips

Best practices for training and testing your models.

Frequently Asked Questions

Before you post a new question, please first look at the Q&A above and existing GitHub issues.

Citation

If you use this code for your research, please cite our papers.

@article{hicsonmez2020ganilla,
  title={GANILLA: Generative adversarial networks for image to illustration translation},
  author={Hicsonmez, Samet and Samet, Nermin and Akbas, Emre and Duygulu, Pinar},
  journal={Image and Vision Computing},
  pages={103886},
  year={2020},
  publisher={Elsevier}
}

@inproceedings{Hicsonmez:2017:DDN:3078971.3078982,
  author = {Hicsonmez, Samet and Samet, Nermin and Sener, Fadime and Duygulu, Pinar},
  title = {DRAW: Deep Networks for Recognizing Styles of Artists Who Illustrate Children's Books},
  booktitle = {Proceedings of the 2017 ACM on International Conference on Multimedia Retrieval},
  year = {2017}
}

Acknowledgments

Our code is heavily inspired by CycleGAN.

The numerical calculations reported in this work were fully performed at TUBITAK ULAKBIM, High Performance and Grid Computing Center (TRUBA resources).