neural-style-tf

This is a TensorFlow implementation of several techniques described in the papers:

* Image Style Transfer Using Convolutional Neural Networks by Leon A. Gatys, Alexander S. Ecker, Matthias Bethge
* Artistic style transfer for videos by Manuel Ruder, Alexey Dosovitskiy, Thomas Brox
* Preserving Color in Neural Artistic Style Transfer by Leon A. Gatys, Matthias Bethge, Aaron Hertzmann, Eli Shechtman

Additionally, techniques are presented for semantic segmentation and multiple style transfer.

The Neural Style algorithm synthesizes a pastiche by separating and combining the content of one image with the style of another image using convolutional neural networks (CNN). Below is an example of transferring the artistic style of The Starry Night onto a photograph of an African lion:
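
The style representation at the heart of the algorithm is the Gram matrix of a CNN layer's feature maps, while content is compared on the raw feature maps themselves. A minimal NumPy sketch of the two comparisons (shapes and function names are illustrative, not this repository's API):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of CNN features with shape (height, width, channels)."""
    h, w, c = features.shape
    f = features.reshape(h * w, c)   # flatten the spatial dimensions
    return f.T @ f                   # (c, c) channel correlations

def style_loss(gen_features, style_features):
    """Squared error between Gram matrices at one layer."""
    h, w, c = gen_features.shape
    g1 = gram_matrix(gen_features)
    g2 = gram_matrix(style_features)
    return np.sum((g1 - g2) ** 2) / (4.0 * (h * w) ** 2 * c ** 2)

def content_loss(gen_features, content_features):
    """Squared error between raw feature maps at one layer."""
    return 0.5 * np.sum((gen_features - content_features) ** 2)
```

Because the Gram matrix discards spatial arrangement and keeps only channel correlations, matching it transfers texture and color statistics without copying the style image's layout.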

<p align="center"> <img src="examples/lions/42_output.png" width="512"/> <img src="examples/lions/content_style.png" width="290"/> </p>

Transferring the style of various artworks to the same content image produces qualitatively convincing results:

<p align="center"> <img src="examples/lions/32_output.png" width="192"> <img src="examples/lions/styles/matisse_crop.jpg" width="192"/> <img src="examples/lions/33_output.png" width="192"/> <img src="examples/lions/styles/water_lilies_crop.jpg" width="192"/> <img src="examples/lions/wave_output.png" width="192"/> <img src="examples/lions/styles/wave_crop.jpg" width="192"/> <img src="examples/lions/basquiat_output.png" width="192"/> <img src="examples/lions/styles/basquiat_crop.jpg" width="192"/> <img src="examples/lions/calliefink_output.png" width="192"/> <img src="examples/lions/styles/calliefink_crop.jpg" width="192"/> <img src="examples/lions/giger_output.png" width="192"/> <img src="examples/lions/styles/giger_crop.jpg" width="192"/> </p>

Here we reproduce Figure 3 from the first paper, which renders a photograph of the Neckarfront in Tübingen, Germany in the style of five different iconic paintings: The Shipwreck of the Minotaur, The Starry Night, Composition VII, The Scream, and Seated Nude.

<p align="center"> <img src="examples/gatys_figure/tubingen.png" height="192px"> <img src="examples/gatys_figure/tubingen_shipwreck.png" height="192px"> <img src="examples/initialization/init_style.png" height="192px"> <img src="examples/gatys_figure/tubingen_picasso.png" height="192px"> <img src="examples/gatys_figure/tubingen_scream.png" height="192px"> <img src="examples/gatys_figure/tubingen_kandinsky.png" height="192px"> </p>

Content / Style Tradeoff

The relative weight of the style and content can be controlled.
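
Concretely, following the first paper, the synthesized image x minimizes a weighted sum of the two losses; the ratio of the weights alpha and beta sets the tradeoff between the content image p and the style image a:

```latex
\mathcal{L}_{total}(\vec{p}, \vec{a}, \vec{x}) =
  \alpha \, \mathcal{L}_{content}(\vec{p}, \vec{x}) +
  \beta \, \mathcal{L}_{style}(\vec{a}, \vec{x})
```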

Here we render with an increasing style weight applied to Red Canna:

<p align="center"> <img src="examples/style_content_tradeoff/okeffe.jpg" height="160px"> <img src="examples/style_content_tradeoff/okeffe_10.png" width="160px"> <img src="examples/style_content_tradeoff/okeffe_100.png" width="160px"> <img src="examples/style_content_tradeoff/okeffe_10000.png" width="160px"> <img src="examples/style_content_tradeoff/output_1000000.png" width="160px"> </p>

Multiple Style Images

More than one style image can be used to blend multiple artistic styles.

<p align="center"> <img src="examples/multiple_styles/tubingen_starry_scream.png" height="192px"> <img src="examples/multiple_styles/tubingen_scream_kandinsky.png" height="192px"> <img src="examples/multiple_styles/tubingen_starry_seated.png" height="192px"> <img src="examples/multiple_styles/tubingen_seated_kandinsky.png.png" height="192px"> <img src="examples/multiple_styles/tubingen_afremov_grey.png" height="192px"> <img src="examples/multiple_styles/tubingen_basquiat_nielly.png" height="192px"> </p>

Top row (left to right): The Starry Night + The Scream, The Scream + Composition VII, Seated Nude + Composition VII
Bottom row (left to right): Seated Nude + The Starry Night, Oversoul + Freshness of Cold, David Bowie + Skull

Style Interpolation

When using multiple style images, the degree of blending between the images can be controlled.

<p align="center"> <img src="image_input/taj_mahal.jpg" height="178px"> <img src="examples/style_interpolation/taj_mahal_scream_2_starry_8.png" height="178px"> <img src="examples/style_interpolation/taj_mahal_scream_8_starry_2.png" height="178px"> <img src="examples/style_interpolation/taj_mahal_afremov_grey_8_2.png" height="178px"> <img src="examples/style_interpolation/taj_mahal_afremov_grey_5_5.png" height="178px"> <img src="examples/style_interpolation/taj_mahal_afremov_grey_2_8.png" height="178px"> </p>

Top row (left to right): content image, .2 The Starry Night + .8 The Scream, .8 The Starry Night + .2 The Scream
Bottom row (left to right): .2 Oversoul + .8 Freshness of Cold, .5 Oversoul + .5 Freshness of Cold, .8 Oversoul + .2 Freshness of Cold

Transfer style but not color

The color scheme of the original image can be preserved by including the flag `--original_colors`. Colors are transferred using either the YUV, YCrCb, CIE L*a*b*, or CIE L*u*v* color spaces.
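
The idea behind `--original_colors` can be sketched in a few lines: convert both images to a luminance/chrominance space (YUV here), keep the luminance of the stylized result, and take the chrominance channels from the content image. A hedged NumPy sketch using the standard BT.601 conversion matrices, not the repository's exact code:

```python
import numpy as np

# BT.601 RGB <-> YUV conversion matrices
RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.147, -0.289,  0.436],
                    [ 0.615, -0.515, -0.100]])
YUV2RGB = np.linalg.inv(RGB2YUV)

def preserve_colors(content_rgb, stylized_rgb):
    """Stylized luminance + content chrominance; images in [0, 1], shape (h, w, 3)."""
    content_yuv = content_rgb @ RGB2YUV.T
    stylized_yuv = stylized_rgb @ RGB2YUV.T
    out_yuv = np.concatenate([stylized_yuv[..., :1],   # Y from the stylized image
                              content_yuv[..., 1:]],   # U, V from the content image
                             axis=-1)
    return np.clip(out_yuv @ YUV2RGB.T, 0.0, 1.0)
```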

Here we reproduce Figure 1 and Figure 2 in the third paper using luminance-only transfer:

<p align="center"> <img src="examples/original_colors/new_york.png" height="165px"> <img src="examples/original_colors/stylized.png" height="165px"> <img src="examples/original_colors/stylized_original_colors.png" height="165px"> <img src="examples/original_colors/garden.png" height="165px"> <img src="examples/original_colors/garden_starry.png" height="165px"> <img src="examples/original_colors/garden_starry_yuv.png" height="165px"> </p>

Left to right: content image, stylized image, stylized image with the original colors of the content image

Textures

The algorithm is not constrained to artistic painting styles. It can also be applied to photographic textures to create pareidolic images.

<p align="center"> <img src="examples/pareidolic/flowers_output.png" width="192px"> <img src="examples/pareidolic/styles/flowers_crop.jpg" width="192px"/> <img src="examples/pareidolic/oil_output.png" width="192px"> <img src="examples/pareidolic/styles/oil_crop.jpg" width="192px"> <img src="examples/pareidolic/dark_matter_output.png" width="192px"> <img src="examples/pareidolic/styles/dark_matter_bw.png" width="192px"> <img src="examples/pareidolic/ben_giles_output.png" width="192px"> <img src="examples/pareidolic/styles/ben_giles.png" width="192px"> </p>

Segmentation

Style can be transferred to semantic segmentations in the content image.

<p align="center"> <img src="examples/segmentation/00110.jpg" height="180px"> <img src="examples/segmentation/00110_mask.png" height="180px"> <img src="examples/segmentation/00110_output.png" height="180px"> <img src="examples/segmentation/00017.jpg" height="180px"> <img src="examples/segmentation/00017_mask.png" height="180px"> <img src="examples/segmentation/00017_output.png" height="180px"> <img src="examples/segmentation/00768.jpg" height="180px"> <img src="examples/segmentation/00768_mask.png" height="180px"> <img src="examples/segmentation/00768_output.png" height="180px"> <img src="examples/segmentation/02630.png" height="180px"> <img src="examples/segmentation/02630_mask.png" height="180px"> <img src="examples/segmentation/02630_output.png" height="180px"> </p>

Multiple styles can be transferred to the foreground and background of the content image.

<p align="center"> <img src="examples/segmentation/02390.jpg" height="180px"> <img src="examples/segmentation/basquiat.png" height="180px"> <img src="examples/segmentation/frida.png" height="180px"> <img src="examples/segmentation/02390_mask.png" height="180px"> <img src="examples/segmentation/02390_mask_inv.png" height="180px"> <img src="examples/segmentation/02390_output.png" height="180px"> <img src="examples/segmentation/02270.jpg" height="180px"> <img src="examples/segmentation/okeffe_red_canna.png" height="180px"> <img src="examples/segmentation/okeffe_iris.png" height="180px"> <img src="examples/segmentation/02270_mask_face.png" height="180px"> <img src="examples/segmentation/02270_mask_face_inv.png" height="180px"> <img src="examples/segmentation/02270_output.png" height="180px"> </p>

Left to right: content image, foreground style, background style, foreground mask, background mask, stylized image
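
When two styles are transferred, the masks act as per-pixel blending weights: the result is the foreground-stylized image under the foreground mask plus the background-stylized image under its inverse. A minimal sketch of that compositing step (names are illustrative):

```python
import numpy as np

def composite(fg_stylized, bg_stylized, mask):
    """Blend two stylized images with a soft mask.

    mask: (h, w) array in [0, 1]; images: (h, w, 3) arrays.
    """
    m = mask[..., np.newaxis]                      # broadcast over channels
    return m * fg_stylized + (1.0 - m) * bg_stylized
```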

Video

Animations can be rendered by applying the algorithm to each source frame. For the best results, the gradient descent is initialized with the previously stylized frame warped to the current frame according to the optical flow between the pair of frames. Loss functions for temporal consistency penalize deviations from the warped previous frame, with disoccluded regions and motion boundaries excluded.
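
The temporal loss can be sketched as a per-pixel weighted squared error between the current stylized frame and the previous stylized frame warped into the current frame, with weights set to 0 in disoccluded regions and at motion boundaries and 1 elsewhere. A simplified version of the short-term loss from the second paper:

```python
import numpy as np

def temporal_loss(current, warped_prev, weights):
    """Weighted per-pixel MSE between stylized frames.

    current, warped_prev: (h, w, 3) stylized frames
    weights: (h, w) certainty mask, 0 at disocclusions and motion boundaries
    """
    d = weights[..., np.newaxis] * (current - warped_prev) ** 2
    return d.sum() / current.size
```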

<p align="center"> <img src="examples/video/input.gif"> <img src="examples/video/opt_flow.gif"> <br> <img src="examples/video/weights.gif"> <img src="examples/video/output.gif"> </p>

Top row (left to right): source frames, ground-truth optical flow visualized
Bottom row (left to right): disoccluded regions and motion boundaries, stylized frames

Big thanks to Mike Burakoff for finding a bug in the video rendering.

Gradient Descent Initialization

The initialization of the gradient descent is controlled with `--init_img_type` for single images and `--init_frame_type` or `--first_frame_type` for video frames. Initializing with white noise allows an arbitrary number of distinct images to be generated, whereas initializing with a fixed image always converges to the same output.
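
A common way to implement the white-noise option is to blend uniform noise with the content image by a noise ratio: ratio 1.0 gives pure noise (a distinct output per RNG seed), ratio 0.0 reproduces the fixed content initialization. A hedged sketch; the parameter names here are illustrative, not the script's flags:

```python
import numpy as np

def init_image(content, init_type="random", noise_ratio=1.0, seed=0):
    """Initial image for gradient descent; content in [0, 1]."""
    if init_type == "content":
        return content.copy()
    rng = np.random.default_rng(seed)
    noise = rng.uniform(0.0, 1.0, size=content.shape)
    return noise_ratio * noise + (1.0 - noise_ratio) * content
```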

Here we reproduce Figure 6 from the first paper:

<p align="center"> <img src="examples/initialization/init_content.png" height="192"> <img src="examples/initialization/init_style.png" height="192"> <img src="examples/initialization/init_random_1.png" height="192"> <img src="examples/initialization/init_random_2.png" height="192"> <img src="examples/initialization/init_random_3.png" height="192"> <img src="examples/initialization/init_random_4.png" height="192"> </p>

Top row (left to right): Initialized with the content image, the style image, white noise (RNG seed 1)
Bottom row (left to right): Initialized with white noise (RNG seeds 2, 3, 4)

Layer Representations

The feature complexities and receptive field sizes increase down the CNN hierarchy.

Here we reproduce Figure 3 from the original paper:

<table align='center'> <tr align='center'> <td></td> <td>1 x 10^-5</td> <td>1 x 10^-4</td> <td>1 x 10^-3</td> <td>1 x 10^-2</td> </tr> <tr> <td>conv1_1</td> <td><img src="examples/layers/conv1_1_1e5.png" width="192"></td> <td><img src="examples/layers/conv1_1_1e4.png" width="192"></td> <td><img src="examples/layers/conv1_1_1e3.png" width="192"></td> <td><img src="examples/layers/conv1_1_1e2.png" width="192"></td> </tr> <tr> <td>conv2_1</td> <td><img src="examples/layers/conv2_1_1e5.png" width="192"></td> <td><img src="examples/layers/conv2_1_1e4.png" width="192"></td> <td><img src="examples/layers/conv2_1_1e3.png" width="192"></td> <td><img src="examples/layers/conv2_1_1e2.png" width="192"></td> </tr> <tr> <td>conv3_1</td> <td><img src="examples/layers/conv3_1_1e5.png" width="192"></td> <td><img src="examples/layers/conv3_1_1e4.png" width="192"></td> <td><img src="examples/layers/conv3_1_1e3.png" width="192"></td> <td><img src="examples/layers/conv3_1_1e2.png" width="192"></td> </tr> <tr> <td>conv4_1</td> <td><img src="examples/layers/conv4_1_1e5.png" width="192"></td> <td><img src="examples/layers/conv4_1_1e4.png" width="192"></td> <td><img src="examples/layers/conv4_1_1e3.png" width="192"></td> <td><img src="examples/layers/conv4_1_1e2.png" width="192"></td> </tr> <tr> <td>conv5_1</td> <td><img src="examples/layers/conv5_1_1e5.png" width="192"></td> <td><img src="examples/layers/conv5_1_1e4.png" width="192"></td> <td><img src="examples/layers/conv5_1_1e3.png" width="192"></td> <td><img src="examples/layers/conv5_1_1e2.png" width="192"></td> </tr> </table>

Rows: increasing subsets of CNN layers; e.g. 'conv4_1' means using 'conv1_1', 'conv2_1', 'conv3_1', 'conv4_1'.
Columns: alpha/beta ratio of the content and style reconstruction (see Content / Style Tradeoff).
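
Each row in the table above uses the style loss summed over a growing set of layers; in code this is just an equally weighted average of per-layer Gram-matrix losses. A sketch, assuming features are passed in as a dict mapping layer names to feature maps:

```python
import numpy as np

STYLE_LAYERS = ["conv1_1", "conv2_1", "conv3_1", "conv4_1", "conv5_1"]

def gram(f):
    """Normalized Gram matrix of features with shape (h, w, c)."""
    h, w, c = f.shape
    f = f.reshape(h * w, c)
    return f.T @ f / (h * w)

def total_style_loss(gen_feats, style_feats, up_to="conv5_1"):
    """Equally weighted style loss over layers conv1_1 .. up_to."""
    layers = STYLE_LAYERS[:STYLE_LAYERS.index(up_to) + 1]
    losses = [np.mean((gram(gen_feats[l]) - gram(style_feats[l])) ** 2)
              for l in layers]
    return sum(losses) / len(layers)
```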

Setup

Dependencies:

Optional (but recommended) dependencies:

After installing the dependencies:

Usage

Basic Usage

Single Image

  1. Copy 1 content image to the default image content directory ./image_input
  2. Copy 1 or more style images to the default style directory ./styles
  3. Run the command:
```bash
bash stylize_image.sh <path_to_content_image> <path_to_style_image>
```

Example:

```bash
bash stylize_image.sh ./image_input/lion.jpg ./styles/kandinsky.jpg
```

Note: Supported image formats include: .png, .jpg, .ppm, .pgm

Note: Paths to images should not contain the ~ character to represent your home directory; you should instead use a relative path or the absolute path.
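
The ~ shortcut is expanded by the shell, not by the script, so a quoted or interpolated path containing ~ arrives verbatim and fails to open. If you need to accept such paths in your own wrapper code, Python's standard library can expand them:

```python
import os.path

path = "~/styles/kandinsky.jpg"
expanded = os.path.expanduser(path)   # replaces leading ~ with the home directory
```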

Video Frames

  1. Copy 1 content video to the default video content directory ./video_input
  2. Copy 1 or more style images to the default style directory ./styles
  3. Run the command:
```bash
bash stylize_video.sh <path_to_video> <path_to_style_image>
```

Example:

```bash
bash stylize_video.sh ./video_input/video.mp4 ./styles/kandinsky.jpg
```

Note: Supported video formats include: .mp4, .mov, .mkv

Advanced Usage

Single Image or Video Frames

  1. Copy content images to the default image content directory ./image_input or copy video frames to the default video content directory ./video_input
  2. Copy 1 or more style images to the default style directory ./styles
  3. Run the command with specific arguments:
```bash
python neural_style.py <arguments>
```

Example (Single Image):

```bash
python neural_style.py --content_img golden_gate.jpg \
                       --style_imgs starry-night.jpg \
                       --max_size 1000 \
                       --max_iterations 100 \
                       --original_colors \
                       --device /cpu:0 \
                       --verbose;
```

To use multiple style images, pass a space-separated list of the image names and image weights like this:

```bash
--style_imgs starry_night.jpg the_scream.jpg --style_imgs_weights 0.5 0.5
```

Example (Video Frames):

```bash
python neural_style.py --video \
                       --video_input_dir ./video_input/my_video_frames \
                       --style_imgs starry-night.jpg \
                       --content_weight 5 \
                       --style_weight 1000 \
                       --temporal_weight 1000 \
                       --start_frame 1 \
                       --end_frame 50 \
                       --max_size 1024 \
                       --first_frame_iterations 3000 \
                       --verbose;
```

Note: When using `--init_frame_type prev_warp` you must have previously computed the backward and forward optical flow between the frames. See `./video_input/make-opt-flow.sh` and `./video_input/run-deepflow.sh`.

Arguments

Optimization Arguments

<p align="center"> <img src="examples/equations/plot.png" width="360px"> </p> <p align="center"> <img src="examples/equations/content.png" width="321px"> </p>

Video Frame Arguments

Questions and Errata

Send questions or issues:
<img src="examples/equations/email.png">

Memory

By default, neural-style-tf uses the NVIDIA cuDNN GPU backend for convolutions and L-BFGS for optimization. These produce better and faster results, but can consume a lot of memory. You can reduce memory usage with the following:

Implementation Details

All images were rendered on a machine with:

Acknowledgements

The implementation is based on the projects:

Source video frames were obtained from:

Artistic images were created by the modern artists:

Artistic images were created by the popular historical artists:

Bash shell scripts for testing were created by my brother Sheldon Smith.

Citation

If you find this code useful for your research, please cite:

@misc{Smith2016,
  author = {Smith, Cameron},
  title = {neural-style-tf},
  year = {2016},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/cysmith/neural-style-tf}},
}