Awesome
Artificial neural networks, anytime
This is a code repository to efficiently train a deconvolutional neural network with rectified linear units.
The code uses Theano, pylearn2, and cuda-convnet, and is heavily based on Sander Dieleman's Kaggle galaxy repo.
It also currently relies on a change to pylearn2.sandbox.cuda_convnet.pool.py that defines a grad method for the MaxPoolGrad class, which Theano needs in order to backpropagate through that op.
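For readers unfamiliar with why such a change matters: Theano can only differentiate through an op whose class defines a grad method. The toy op below is an illustrative sketch only, not the actual pylearn2 patch and unrelated to cuda-convnet; it just shows the general pattern the change follows.

```python
# Illustrative sketch -- a toy Theano op, NOT the actual pylearn2 patch.
# It shows the pattern the change follows: giving an op class a grad()
# method so T.grad() can backpropagate through graphs that contain it.
import theano
import theano.tensor as T


class Square(theano.Op):
    """Toy op computing x ** 2 elementwise."""
    __props__ = ()

    def make_node(self, x):
        x = T.as_tensor_variable(x)
        return theano.Apply(self, [x], [x.type()])

    def perform(self, node, inputs, output_storage):
        (x,) = inputs
        output_storage[0][0] = x * x

    # Without this method, T.grad() fails on any graph containing the op;
    # the pylearn2 change adds the analogous method to MaxPoolGrad.
    def grad(self, inputs, output_grads):
        (x,) = inputs
        (g_out,) = output_grads
        return [2 * x * g_out]


x = T.vector('x')
cost = Square()(x).sum()
g = T.grad(cost, x)  # only possible because grad() is defined
```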
ICLR 2015 Paper Repo
If you came here via our paper An Analysis of Unsupervised Pre-training in Light of Recent Advances, please go here to access the experiments we ran.
Layout of the code
There are currently 3 main modules:
- datasets - generic dataset classes
- layers - layer definitions
- util - utilities for training, evaluating, and saving/loading checkpoints
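The modules are not documented further here, but the following self-contained sketch (plain Theano/NumPy, independent of this repo and not its actual API) shows the kind of pipeline the three pieces cover: a toy dataset stands in for datasets, a single convolutional layer with rectified linear units stands in for layers, and an SGD loop plus a pickled checkpoint stands in for util.

```python
# Self-contained sketch of the dataset -> layers -> training/checkpoint
# pipeline. All names are illustrative placeholders, not this repo's API.
import cPickle
import numpy as np
import theano
import theano.tensor as T
from theano.tensor.nnet import conv

rng = np.random.RandomState(0)

# "datasets": a toy batch of 8 RGB 32x32 images with binary labels.
images = rng.randn(8, 3, 32, 32).astype(theano.config.floatX)
labels = rng.randint(0, 2, size=8).astype('int32')

# "layers": one convolutional layer with rectified linear units,
# followed by a linear softmax classifier on the flattened feature maps.
x = T.tensor4('x')
y = T.ivector('y')
W_conv = theano.shared(0.01 * rng.randn(16, 3, 5, 5).astype(theano.config.floatX))
feat = T.maximum(conv.conv2d(x, W_conv), 0)      # conv + ReLU
feat_flat = feat.flatten(2)                      # (batch, 16 * 28 * 28)
W_out = theano.shared(np.zeros((16 * 28 * 28, 2), dtype=theano.config.floatX))
p_y = T.nnet.softmax(T.dot(feat_flat, W_out))
loss = -T.mean(T.log(p_y)[T.arange(y.shape[0]), y])

# "util": SGD updates, a compiled training function, and a checkpoint.
params = [W_conv, W_out]
grads = T.grad(loss, params)
updates = [(p, p - 0.01 * g) for p, g in zip(params, grads)]
train = theano.function([x, y], loss, updates=updates)

for epoch in range(5):
    print epoch, train(images, labels)

with open('checkpoint.pkl', 'wb') as f:
    cPickle.dump([p.get_value() for p in params], f)
```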