# fauxtograph

This package contains classes for training three different unsupervised, generative image models: Variational Auto-Encoders (VAE), Generative Adversarial Networks (GAN), and the newly developed combination of the two (VAE/GAN). Descriptions of the inner workings of these algorithms can be found in

  1. Kingma, Diederik P., and Max Welling. "Auto-Encoding Variational Bayes." arXiv preprint arXiv:1312.6114 (2013).
  2. Radford, Alec, et al. "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks." arXiv preprint arXiv:1511.06434 (2015).
  3. Larsen, Anders Boesen Lindbo, et al. "Autoencoding Beyond Pixels Using a Learned Similarity Metric." arXiv preprint arXiv:1512.09300 (2015).

respectively.
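
As a point of reference for the VAE-based models, the objective from Kingma and Welling (reference 1 above) is the variational lower bound on the data likelihood: an expected reconstruction term minus a KL divergence penalty (the `--kl_ratio` option used in the training command below scales this KL term relative to the reconstruction term).

```latex
% Variational lower bound (ELBO) maximized when training a VAE:
% expected reconstruction log-likelihood of x under the decoder, minus the
% KL divergence between the approximate posterior q_phi(z|x) and the prior p(z).
\mathcal{L}(\theta, \phi; x)
  = \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big]
  - D_{\mathrm{KL}}\big(q_\phi(z \mid x) \,\|\, p(z)\big)
```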

All models take in a series of images and can be trained to perform either an encoding transform step or a generative inverse_transform step (or both). The package is built on top of the Chainer framework and comes with an easy-to-use command line interface for training and generating images with a Variational Auto-Encoder.
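
For illustration, here is a minimal sketch of what that workflow looks like at the module level. The `VAE` class name, its constructor, and the `fit` call are assumptions made for this example (only the `transform`/`inverse_transform` steps are named above); see fauxtograph/fauxtograph.py for the actual API.

```python
# Illustrative sketch only: the class name, constructor, and fit() call are
# assumptions for this example; consult fauxtograph/fauxtograph.py for the
# real API. Only the transform/inverse_transform steps are described above.
import numpy as np
from fauxtograph import VAE  # assumed import

vae = VAE()  # hypothetical default constructor

# Stand-in batch of 32 RGB images (channels-first) as a float32 NumPy array.
images = np.random.rand(32, 3, 96, 96).astype(np.float32)

vae.fit(images)                                  # train the model (assumed method)

latent = vae.transform(images)                   # encode images -> latent vectors
reconstructions = vae.inverse_transform(latent)  # decode latent vectors -> images

# Generate novel images by decoding random draws from the latent prior.
samples = vae.inverse_transform(
    np.random.randn(*latent.shape).astype(np.float32)
)
```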

Both the module and the training script are available by installing this package through PyPI. Alternatively, the module containing the main classes that do the heavy lifting lives in fauxtograph/fauxtograph.py (with dependencies in fauxtograph/vaegan.py), while the training/generation CLI script is in fauxtograph/fauxto.py.

To learn more about the command line tool's functionality and to get a better sense of how one might use it, please see the blog post on the Stitch Fix tech blog, multithreaded.

## Installation

The simplest way to get started with the module is to install it via pip:

```bash
$ pip install fauxtograph
```

This should also pull in all necessary dependencies, including the main backend neural network framework, Chainer. However, if you plan on using CUDA to train models on a GPU, you'll need to additionally install the Chainer CUDA dependencies with:

```bash
$ pip install chainer-cuda-deps
```
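
After installing, a quick sanity check that Chainer can actually use the GPU is its `cuda.available` flag. This snippet is a sketch written against the Chainer releases current at the time; the exact attribute may differ in other versions.

```python
# Check whether Chainer's CUDA backend initialized successfully.
# Attribute names may vary across Chainer versions; treat this as a sketch.
from chainer import cuda

if cuda.available:
    print("CUDA is available: GPU training should work.")
else:
    print("CUDA is not available: training will fall back to the CPU.")
```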

## Usage

To get started, you can either supply your own image set or use the downloading tool to grab some of the Hubble/ESA space images, which I've found make for interesting results.

To grab the images and place them in an images folder, run:

```bash
$ fauxtograph download ./images
```

This process can take some time depending on your internet connection.

Then you can train a model and save it to disk with:

```bash
$ fauxtograph train --kl_ratio 0.005 ./images ./models/model_name
```

Finally, you can generate new images from your trained model with:

```bash
$ fauxtograph generate ./models/model_name_model.h5 ./models/model_name_opt.h5 ./models/model_name_meta.json ./generated_images_folder
```

Each command comes with a `--help` option that lists its optional arguments.

## Tips

*Using the CLI*

*Generally*

ENJOY