Notes on Deep Learning
New Repo of Deep Learning Papers! :star2: :boom:
I moved my collection of deep learning and machine learning papers from Dropbox to this git repository! The first blog post being planned is on "Boltzmann Machines, Statistical Mechanics and Maximum Likelihood Estimation".
LINK: GitHub/episodeyang/deep_learning_papers_TLDR
From the Author
These are the notes I took while working through Nielsen's Neural Networks and Deep Learning book. You can find a table of contents of this repo below.
Table of Contents
Chapter 1: Intro to Deep Learning
- 001 - sigmoid function
- 002 - training a single perceptron
- 003 - using perceptrons to fit an arbitrary target function
- 004 - optimizing batch training
Chapter 2: Intro to TensorFlow
Chapter 3: Advanced TensorFlow with a GPU AWS Instance and the PyCharm Remote Interpreter.
- MNIST Logistic Regression
- MNIST Logistic Regression with L2 Regularization
- MNIST Perceptron with 1 Hidden Layer
Chapter 4: Recurrent Networks.
Here I implemented a vanilla RNN from scratch. I didn't want to write the partial derivatives by hand, but TensorFlow feels a bit too opaque. The edf framework by TTIC is a poor man's TensorFlow, and it provides auto-differentiation via the component.backward() method, so I decided to go with it.
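For reference, the core recurrence of a vanilla RNN is tiny. Here is a minimal numpy sketch of one forward step (names and sizes are illustrative placeholders, not the edf implementation):

```python
import numpy as np

def rnn_step(x_t, h_prev, Wxh, Whh, bh):
    """One vanilla RNN step: new hidden state from the current input and previous state."""
    return np.tanh(x_t @ Wxh + h_prev @ Whh + bh)

# Tiny usage example with made-up sizes.
D, H = 8, 16                          # input and hidden dimensions (placeholders)
Wxh = np.random.randn(D, H) * 0.1
Whh = np.random.randn(H, H) * 0.1
bh = np.zeros(H)

h = np.zeros(H)
for x_t in np.random.randn(5, D):     # a length-5 input sequence
    h = rnn_step(x_t, h, Wxh, Whh, bh)
print(h.shape)                        # (16,)
```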
I also implemented RMSProp and Adam by hand and experimented with hyper-parameter search. It was extremely informative.
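Here is a minimal sketch of the two update rules, written with numpy only (variable names are illustrative, not the ones in the repo):

```python
import numpy as np

def rmsprop_step(w, grad, cache, lr=1e-3, decay=0.9, eps=1e-8):
    """One RMSProp update: scale the step by a running average of squared gradients."""
    cache = decay * cache + (1 - decay) * grad ** 2
    w = w - lr * grad / (np.sqrt(cache) + eps)
    return w, cache

def adam_step(w, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update (t starts at 1): bias-corrected first and second moment estimates."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)       # bias correction for the mean
    v_hat = v / (1 - beta2 ** t)       # bias correction for the variance
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```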
- Multi-Layer Perceptron: Intro to the edf framework
- Implementing Optimization Algorithms and Hyper-Parameter Search
- Vanilla Recurrent Neural Networks
- Long Short-Term Memory (LSTM)
- Gated Recurrent Unit (GRU)
Project: Particle Simulation with TensorFlow
Project: LeNet with Novel Loss Function
Fun Highlights (Reverse Chronological Order)
Some of the figures can be found scattered through the repo (I believe in a flat folder structure).
Particle Simulation with TensorFlow! (classical many-body simulation for my quantum computing research)
It turns out that not having to write the Jacobian of your equations of motion by hand is a huge time saver when doing particle simulations.
Here is a 2D classical many-body simulator I wrote for my quantum computing research. In my lab, I am building a new type of qubit by trapping single electrons on the surface of superfluid helium. You can read more about our progress in this paper from PRX.
In this new experiment, we want to construct a very small electrostatic trap so that we can couple a microwave mirror to the dipole of a single electron. To understand where the electrons are likely to go, I needed to build a simple electrostatic simulation.
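The point about the Jacobian is that TensorFlow's automatic differentiation gives you the forces directly as the gradient of the potential energy. A minimal sketch of the idea, in the TensorFlow 1.x style used elsewhere in this repo (the potential, sizes, and step size are placeholders, not the actual simulation):

```python
import tensorflow as tf

N = 8                                         # number of electrons (placeholder)
positions = tf.Variable(tf.random_normal([N, 2]), name="positions")

# Toy potential: pairwise Coulomb-like repulsion plus a harmonic confining trap.
diff = positions[:, None, :] - positions[None, :, :]
dist = tf.sqrt(tf.reduce_sum(diff ** 2, axis=-1) + 1e-9)
mask = 1.0 - tf.eye(N)                        # exclude self-interaction
coulomb = tf.reduce_sum(mask / dist) / 2.0    # each pair counted once
confinement = 0.5 * tf.reduce_sum(positions ** 2)
energy = coulomb + confinement

# Auto-differentiation: the force is just -dE/dx; no hand-written Jacobian needed.
force = -tf.gradients(energy, positions)[0]

# Simple gradient-descent relaxation toward an equilibrium configuration.
step = positions.assign_add(0.01 * force)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(1000):
        sess.run(step)
    print(sess.run(positions))
```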
<p align="center"> <img width="300px" height="300px" alt="Electron Configuration During Simulation" src="Proj_Molecular_Simulation/figures/Electron%20Configuration%20Animated%20(WIP)%20small.gif"/> </p>Projecting MNIST into a 2-Dimensional Deep Feature Space
It turns out that you can constrict the feature space of a convolutional neural network and project the MNIST dataset onto a 2-dimensional plane!
This is my attempt at reproducing the work from Yandong Wen's paper (for the link, see the project README (WIP)).
<p align="center"> <img width="348.8px" height="280.4px" src="Proj_Centroid_Loss_LeNet/LeNet_plus/figures/MNIST%20LeNet++%20with%202%20Deep%20Features%20(PReLU).png"/> </p>This makes very nice visualizations. Curious about how this embedding evolves during training, I made a few movies. You can find them inside the project folder.
<p align="center"> <img alt="network learning" src="Proj_Centroid_Loss_LeNet/LeNet_plus_centerloss/figures/animation/MNIST_LeNet_centroid_loss_lambda_0.001.gif"/> </p>MNIST ConvNet with TensorFlow
My first attempt at building a convolutional neural network with TensorFlow.
This example does the following (a minimal sketch follows the list):
- uses different GPUs for training and evaluation (manual device placement)
- persists network parameters in checkpoint files (session saving and restoring)
- pushes loss and accuracy to summaries, which can be visualized with TensorBoard (summary and tensorboard)
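A minimal sketch of those three pieces together, in TensorFlow 1.x (device strings, shapes, and paths are placeholders):

```python
import tensorflow as tf

# Illustrative only: a tiny graph that exercises manual device placement,
# checkpointing with tf.train.Saver, and TensorBoard summaries.
with tf.device("/gpu:0"):                       # training ops pinned to the first GPU
    x = tf.placeholder(tf.float32, [None, 784], name="x")
    y = tf.placeholder(tf.float32, [None, 10], name="y")
    logits = tf.layers.dense(x, 10)
    loss = tf.losses.softmax_cross_entropy(onehot_labels=y, logits=logits)
    train_op = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

with tf.device("/gpu:1"):                       # evaluation ops pinned to a second GPU
    correct = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))

tf.summary.scalar("loss", loss)                 # scalars show up in TensorBoard
tf.summary.scalar("accuracy", accuracy)
merged = tf.summary.merge_all()

saver = tf.train.Saver()                        # persists variables to checkpoint files
writer = tf.summary.FileWriter("./logs")

with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
    sess.run(tf.global_variables_initializer())
    # Inside the training loop, something like:
    # summary, _ = sess.run([merged, train_op], feed_dict={x: batch_x, y: batch_y})
    # writer.add_summary(summary, step)
    # saver.save(sess, "./checkpoints/model.ckpt", global_step=step)
```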
A simple toy example
The example below shows how a simple network can be trained to emulate a given target function. It is implemented with numpy alone, without the help of TensorFlow.
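A minimal sketch of the idea (illustrative only, not the exact network from the notebook): a one-hidden-layer net fit to a target function with hand-written gradients and plain gradient descent.

```python
import numpy as np

np.random.seed(0)
target = lambda x: np.sin(x)                   # target function to emulate (illustrative)

X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
Y = target(X)

# One hidden layer with sigmoid activations.
H = 16
W1, b1 = np.random.randn(1, H) * 0.5, np.zeros(H)
W2, b2 = np.random.randn(H, 1) * 0.5, np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - Y                             # dLoss/dpred for 0.5 * MSE

    # Backward pass: hand-written partial derivatives.
    dW2 = h.T @ err / len(X)
    db2 = err.mean(axis=0)
    dh = err @ W2.T * h * (1 - h)
    dW1 = X.T @ dh / len(X)
    db1 = dh.mean(axis=0)

    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print("final MSE:", np.mean(err ** 2))
```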
Todos (02/07/2017):
- Wormhole RNN [pdf]
- Experiment with PyTorch
- Proj RNN: Teach RNN how to do math
- Proj NLP: syntax highlighter for natural language
- Restricted Boltzmann Machine, and how it is used in deep belief networks to initialize auto-encoders [Hinton, 2006]
- binary weight networks (XNOR-Net)
- Attention Networks: link: Augmented RNN
- Image Captioning
- Adversarial Hardened LeNet++ [1.0]
- Adversarial Test of Hardened LeNet++ [1.0]
- L2 Regularization with Logistic Regression [1.0]
Bucket List and Things Run Into During Reading (not in order)
- Denoising Autoencoder
- Word2vec
Done:
- work on optimizing batch training (numpy neural net)
- add summary MNIST example with Tensorflow
- Convolutional Neural Network
- multi-GPU setup tensorflow doc [0.5 - 1.0]
- CIFAR Example [4.0]
- Save and restore net
- MNIST Perceptron logging and visualization with tensorboard
- Feedforward Neural Network (Multilayer Perceptron) tensorboard doc [2.0]
- TensorBoard
- LeNet training ConvNet doc [1.0]
- LeNet++ training [1.0]
- Deep Feedforward Neural Network (Multilayer Perceptron with 2 Hidden Layers O.o)
- Vanilla Recurrent Neural Network
- regularization and batch normalization
- LSTM with edf
- GRU with edf
More Useful Links:
- Useful examples: @Aymericdamien's TensorFlow-Example
- More useful examples: @nlintz's TensorFlow-Tutorials