pytorch-AdaIN

This is an unofficial PyTorch implementation of the paper Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization [Huang+, ICCV2017]. I'm really grateful to the authors for their original Torch implementation, which has been very useful.
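
For reference, the core AdaIN operation from the paper aligns the channel-wise mean and standard deviation of the content features to those of the style features. The sketch below is a minimal, self-contained PyTorch version of that formula; it is illustrative only and not necessarily identical to the function used in this repository.

import torch

def adain(content_feat, style_feat, eps=1e-5):
    # content_feat, style_feat: (N, C, H, W) feature maps from a VGG encoder
    n, c = content_feat.shape[:2]
    c_flat = content_feat.view(n, c, -1)
    s_flat = style_feat.view(n, c, -1)
    c_mean, c_std = c_flat.mean(dim=2, keepdim=True), c_flat.std(dim=2, keepdim=True) + eps
    s_mean, s_std = s_flat.mean(dim=2, keepdim=True), s_flat.std(dim=2, keepdim=True) + eps
    # Normalize away the content statistics, then re-scale/shift with the style statistics
    out = (c_flat - c_mean) / c_std * s_std + s_mean
    return out.view_as(content_feat)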

Results

Requirements

Please install the requirements with pip install -r requirements.txt

Some of the requirements are optional and only needed for training.

Usage

Download models

Download decoder.pth and vgg_normalized.pth from the release page and put them under models/.
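
These files are PyTorch checkpoints and can be loaded with torch.load. The sketch below is only an illustration of that step; the module names net.decoder and net.vgg are assumptions here (check the repository's network definitions for the modules actually used by test.py and train.py).

import torch
import net  # assumption: the repo defines the decoder and VGG encoder in net.py

# Assumed to be plain state dicts; adjust if the checkpoints are stored differently.
decoder = net.decoder
decoder.load_state_dict(torch.load('models/decoder.pth', map_location='cpu'))

vgg = net.vgg
vgg.load_state_dict(torch.load('models/vgg_normalized.pth', map_location='cpu'))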

Test

Use --content and --style to provide the paths to the content image and the style image, respectively.

CUDA_VISIBLE_DEVICES=<gpu_id> python test.py --content input/content/cornell.jpg --style input/style/woman_with_hat_matisse.jpg

You can also run the code on directories of content and style images using --content_dir and --style_dir. It will save every possible combination of content and style images to the output directory.

CUDA_VISIBLE_DEVICES=<gpu_id> python test.py --content_dir input/content --style_dir input/style
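
To make the "every possible combination" behavior concrete, here is the equivalent pairing logic in Python. This is illustrative only; the script's actual loop and output naming may differ.

from itertools import product
from pathlib import Path

content_paths = sorted(Path('input/content').glob('*'))
style_paths = sorted(Path('input/style').glob('*'))

# One stylized result per (content, style) pair; the output name here is hypothetical
for content, style in product(content_paths, style_paths):
    print(f'{content.name} + {style.name} -> output/{content.stem}_stylized_{style.stem}.jpg')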

This is an example of mixing four styles by specifying the --style and --style_interpolation_weights options.

CUDA_VISIBLE_DEVICES=<gpu_id> python test.py --content input/content/avril.jpg --style input/style/picasso_self_portrait.jpg,input/style/impronte_d_artista.jpg,input/style/trial.jpg,input/style/antimonocromatismo.jpg --style_interpolation_weights 1,1,1,1 --content_size 512 --style_size 512 --crop
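
Under the hood, style interpolation (as described in the paper) takes a weighted combination of the AdaIN-transformed features, one per style image, before decoding. A rough sketch, reusing the adain function from the introduction above; the implementation in test.py may differ in detail.

import torch

def interpolate_styles(content_feat, style_feats, weights):
    # style_feats: list of (1, C, H, W) feature maps, one per style image
    # weights: e.g. [1, 1, 1, 1] as passed via --style_interpolation_weights
    w = torch.tensor(weights, dtype=torch.float32)
    w = w / w.sum()  # normalize so the weights sum to 1
    feat = torch.zeros_like(content_feat)
    for wi, sf in zip(w, style_feats):
        feat = feat + wi * adain(content_feat, sf)
    return feat  # this combined feature is then passed to the decoder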

Some other options are also available (e.g. --content_size, --style_size, and --crop, as used in the examples above); see the --help option for the full list.

Train

Use --content_dir and --style_dir to provide the directories of content and style images, respectively.

CUDA_VISIBLE_DEVICES=<gpu_id> python train.py --content_dir <content_dir> --style_dir <style_dir>
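
For context, training in the paper optimizes only the decoder: the content loss is the MSE between the encoder features of the generated image and the AdaIN target t, and the style loss matches the channel-wise means and standard deviations of encoder features at several VGG layers. The sketch below summarizes that objective; it is a simplification under those assumptions, and the default style weight is illustrative, so it may not match train.py exactly.

import torch
import torch.nn.functional as F

def mean_std(feat, eps=1e-5):
    # Channel-wise mean and std over spatial dimensions, shape (N, C)
    n, c = feat.shape[:2]
    flat = feat.view(n, c, -1)
    return flat.mean(dim=2), flat.std(dim=2) + eps

def adain_loss(out_feats, t, style_feats, style_weight=10.0):
    # out_feats / style_feats: lists of per-layer encoder features of the
    # generated image and the style image; t: AdaIN target at the deepest layer
    content_loss = F.mse_loss(out_feats[-1], t)
    style_loss = 0.0
    for of, sf in zip(out_feats, style_feats):
        om, os_ = mean_std(of)
        sm, ss = mean_std(sf)
        style_loss = style_loss + F.mse_loss(om, sm) + F.mse_loss(os_, ss)
    return content_loss + style_weight * style_loss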

For more details and parameters, please refer to the --help option.

I share the model trained with this code as iter_1000000.pth on the release page.

References