Avatar-Net: Multi-scale Zero-shot Style Transfer by Feature Decoration

This repository contains the code (in TensorFlow) for the paper:

Avatar-Net: Multi-scale Zero-shot Style Transfer by Feature Decoration <br> Lu Sheng, Ziyi Lin, Jing Shao, Xiaogang Wang <br> CVPR 2018

Overview

In this repository, we propose an efficient and effective Avatar-Net that enables visually plausible multi-scale transfer for arbitrary styles in real time. The key ingredient is a style decorator that re-creates the content features from semantically aligned style features, which not only holistically matches their feature distributions but also preserves detailed style patterns in the decorated features. By embedding this module into an image reconstruction network that fuses multi-scale style abstractions, Avatar-Net renders multi-scale stylization for any style image in a single feed-forward pass.
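
To make the decorator concrete, below is a deliberately simplified NumPy sketch of the idea, not the repository's implementation: project the content and style features, match each content position to its most similar style position, and reassemble the output from the matched style features. The actual decorator operates on whitened features and matches patches rather than single positions.

```python
# Simplified sketch of the style-decorator idea (NOT the repo's code):
# 1) project features, 2) match each content position to the most similar
# style position, 3) rebuild the output from the matched style features.
import numpy as np

def project(feat):
    """Center per channel, then scale each position's vector to unit norm.
    feat: (C, N) array of N feature vectors with C channels each."""
    feat = feat - feat.mean(axis=1, keepdims=True)
    return feat / (np.linalg.norm(feat, axis=0, keepdims=True) + 1e-8)

def decorate(content, style):
    """Swap each projected content vector for its best-matching style vector,
    so the result keeps the style's detailed feature patterns."""
    sim = project(content).T @ project(style)   # (Nc, Ns) cosine similarities
    match = sim.argmax(axis=1)                  # nearest style position
    return style[:, match]                      # decorated features, (C, Nc)

# Toy usage with random stand-ins for VGG-style feature maps.
rng = np.random.default_rng(0)
out = decorate(rng.normal(size=(64, 256)), rng.normal(size=(64, 300)))
print(out.shape)  # (64, 256)
```

Matching on projected features rather than raw activations is what lets unseen styles be handled zero-shot.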

[teaser figure]

Examples

[image results figure]

Comparison with Prior Arts

<p align='center'><img src="./docs/figures/closed_ups.png" width="500"></p>

Execution Efficiency

| Method | Gatys et al. | AdaIN | WCT | Style-Swap | Avatar-Net |
| --- | --- | --- | --- | --- | --- |
| 256x256 (sec) | 12.18 | 0.053 | 0.62 | 0.064 | 0.071 |
| 512x512 (sec) | 43.25 | 0.11 | 0.93 | 0.23 | 0.28 |

Dependencies

Download

Usage

Basic Usage

Use the bash script ./scripts/evaluate_style_transfer.sh to apply Avatar-Net to every content image in CONTENT_DIR with every style image in STYLE_DIR. For example,

bash ./scripts/evaluate_style_transfer.sh gpu_id CONTENT_DIR STYLE_DIR EVAL_DIR 
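
For instance, with hypothetical directories, the following stylizes every image in ./data/contents with every image in ./data/styles on GPU 0 and writes the outputs to ./results:

bash ./scripts/evaluate_style_transfer.sh 0 ./data/contents ./data/styles ./results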

More detailed evaluation options can be found in evaluate_style_transfer.py, which can also be invoked directly:

python evaluate_style_transfer.py

Configuration

The detailed configuration of Avatar-Net is listed in configs/AvatarNet.yml, including the training specifications and network hyper-parameters. The style decorator itself exposes three options; refer to the configuration file for their names and defaults.
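
As a quick way to inspect the available options, the file can be loaded with PyYAML. A minimal sketch (the key names are whatever the file defines):

```python
# Minimal sketch: print the top-level options in configs/AvatarNet.yml.
# Assumes PyYAML is installed; the keys come from the file itself.
import yaml

with open('configs/AvatarNet.yml') as f:
    config = yaml.safe_load(f)

for key, value in config.items():
    print(key, '=', value)
```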

The style transfer itself is performed in AvatarNet.transfer_styles(self, inputs, styles, inter_weight, intra_weights), in which inputs holds the content images, styles holds one or more style images, inter_weight sets the content-style trade-off, and intra_weights weights the individual styles when several are blended.

Users may modify the evaluation script to interpolate between multiple styles or to adjust the content-style trade-off.
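
As a rough guide, here is one plausible reading of how the two weights interact, inferred only from the parameter names and not from the repository's code; consult AvatarNet.transfer_styles for the actual behavior:

```python
# ASSUMPTION, not the repo's logic: intra_weights mixes the per-style
# results, then inter_weight blends that mixture back with the content.
import numpy as np

def blend(content, stylized_list, inter_weight, intra_weights):
    """content: (H, W, C) array; stylized_list: one (H, W, C) array per style."""
    mix = sum(w * s for w, s in zip(intra_weights, stylized_list))
    return inter_weight * mix + (1.0 - inter_weight) * content

content = np.zeros((4, 4, 3))
s1, s2 = np.ones((4, 4, 3)), 2 * np.ones((4, 4, 3))
# Two styles mixed evenly, at 80% stylization strength.
print(blend(content, [s1, s2], 0.8, [0.5, 0.5]).mean())  # 1.2
```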

Training

  1. Download the MSCOCO dataset and convert the raw images into tfexamples with the Python script ./datasets/convert_mscoco_to_tfexamples.py (a sketch of this conversion follows these steps).
  2. Run bash ./scripts/train_image_reconstruction.sh gpu_id DATASET_DIR MODEL_DIR to start training with the default hyper-parameters, where gpu_id selects the GPU for the TensorFlow session. Replace DATASET_DIR with the path to the MSCOCO training images and MODEL_DIR with the directory for the Avatar-Net model.
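
To illustrate step 1, the sketch below shows the generic TFRecord pattern of wrapping each encoded image in a tf.train.Example; the feature key and file names are assumptions, and ./datasets/convert_mscoco_to_tfexamples.py defines the actual record format:

```python
# Illustrative only: the generic image -> tf.train.Example -> TFRecord
# pattern. The 'image/encoded' key and the paths are placeholders.
import tensorflow as tf

def image_to_example(jpeg_path):
    with open(jpeg_path, 'rb') as f:
        encoded = f.read()
    feature = {'image/encoded': tf.train.Feature(
        bytes_list=tf.train.BytesList(value=[encoded]))}
    return tf.train.Example(features=tf.train.Features(feature=feature))

with tf.python_io.TFRecordWriter('mscoco_train.tfrecord') as writer:
    writer.write(image_to_example('example.jpg').SerializeToString())
```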

Citation

If you find this code useful for your research, please cite the paper:

Lu Sheng, Ziyi Lin, Jing Shao and Xiaogang Wang, "Avatar-Net: Multi-scale Zero-shot Style Transfer by Feature Decoration", in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. [arXiv]

@inproceedings{sheng2018avatar,
    title = {Avatar-Net: Multi-scale Zero-shot Style Transfer by Feature Decoration},
    author = {Sheng, Lu and Lin, Ziyi and Shao, Jing and Wang, Xiaogang},
    booktitle = {Computer Vision and Pattern Recognition (CVPR), 2018 IEEE Conference on},
    pages = {1--9},
    year = {2018}
}

Acknowledgement

This project is inspired by many style-agnostic style transfer methods, including AdaIN, WCT, and Style-Swap, drawing on both their papers and their code.

Contact

If you have any questions or suggestions about this paper, feel free to contact me at lsheng@ee.cuhk.edu.hk.