# Universal Style Transfer
This is a PyTorch implementation of Universal Style Transfer via Feature Transforms.
The official Torch implementation can be found here, and a TensorFlow implementation can be found here.
## Prerequisites
- PyTorch
- torchvision
- Pretrained encoder and decoder models (trained for image reconstruction only); download them and uncompress under `models/`
- CUDA + cuDNN
## Prepare images
Put content and style images in `images/content` and `images/style`, respectively. Note that corresponding content and style images should have the same file name.
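To illustrate the naming convention, here is a minimal, hypothetical sketch that pairs content and style images by matching file names. It is not part of `WCT.py`; the directory paths are the ones mentioned above, everything else is illustrative.

```python
from pathlib import Path

# Pair each content image with the style image of the same name,
# mirroring the naming convention this repo expects.
content_dir, style_dir = Path("images/content"), Path("images/style")
pairs = [(p, style_dir / p.name)
         for p in sorted(content_dir.iterdir())
         if (style_dir / p.name).exists()]
print(pairs)
```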
## Style Transfer
```
python WCT.py --cuda
```
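For intuition about what `WCT.py` computes, below is a minimal, single-level sketch of the whitening and coloring transform (WCT) from the paper: content features are whitened via the eigendecomposition of their covariance, then colored to match the style covariance, and finally blended with the original content features. Function and parameter names (e.g. `wct`, `alpha`) are illustrative assumptions, not the exact API of this repo, and the real pipeline applies the transform at multiple VGG levels.

```python
import torch

def wct(content_feat, style_feat, alpha=0.6, eps=1e-5):
    """Whitening-coloring transform sketch for features of shape (C, H, W)."""
    C, Hc, Wc = content_feat.shape

    fc = content_feat.reshape(C, -1)   # (C, Hc*Wc)
    fs = style_feat.reshape(C, -1)     # (C, Hs*Ws)
    eye = eps * torch.eye(C, device=fc.device)

    # Whiten: center content features and remove their covariance structure.
    mc = fc.mean(dim=1, keepdim=True)
    fc_c = fc - mc
    cov_c = fc_c @ fc_c.t() / (fc_c.shape[1] - 1) + eye
    evals_c, evecs_c = torch.linalg.eigh(cov_c)
    whiten = evecs_c @ torch.diag(evals_c.clamp(min=eps).rsqrt()) @ evecs_c.t()
    fc_white = whiten @ fc_c

    # Color: impose the style covariance on the whitened content features,
    # then re-add the style mean.
    ms = fs.mean(dim=1, keepdim=True)
    fs_c = fs - ms
    cov_s = fs_c @ fs_c.t() / (fs_c.shape[1] - 1) + eye
    evals_s, evecs_s = torch.linalg.eigh(cov_s)
    color = evecs_s @ torch.diag(evals_s.clamp(min=eps).sqrt()) @ evecs_s.t()
    fcs = color @ fc_white + ms

    # Blend stylized features with the original content features.
    fcs = alpha * fcs + (1.0 - alpha) * fc
    return fcs.reshape(C, Hc, Wc)
```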
## Results
<img src="images/content/in1.jpg" width="200" hspace="5"><img src="images/style/in1.jpg" width="200" hspace="5"><img src="images/content/in3.jpg" width="200" hspace="5"><img src="images/style/in3.jpg" width="200" hspace="5">
<img src="samples/in1.jpg" width="400" hspace="10"><img src="samples/in3.jpg" width="400" hspace="10">
<img src="images/content/in2.jpg" width="200" hspace="5"><img src="images/style/in2.jpg" width="200" hspace="5"><img src="images/content/in4.jpg" width="200" hspace="5"><img src="images/style/in4.jpg" width="200" hspace="5">
<img src="samples/in2.jpg" width="400" hspace="10"><img src="samples/in4.jpg" width="400" hspace="10">
## Acknowledgments
Many thanks to the author Yijun Li for his kind help.
## Reference
Li Y., Fang C., Yang J., et al. Universal Style Transfer via Feature Transforms. arXiv preprint arXiv:1705.08086, 2017.