Differentiable Neural Computers and family, for Pytorch

Includes:

  1. Differentiable Neural Computers (DNC)
  2. Sparse Access Memory (SAM)
  3. Sparse Differentiable Neural Computers (SDNC)


This is an implementation of Differentiable Neural Computers, described in the paper Hybrid computing using a neural network with dynamic external memory (Graves et al.), as well as Sparse DNCs (SDNCs) and Sparse Access Memory (SAM), described in Scaling Memory-Augmented Neural Networks with Sparse Reads and Writes.

Install

pip install dnc

From source

git clone https://github.com/ixaxaar/pytorch-dnc
cd pytorch-dnc
pip install -r ./requirements.txt
pip install -e .

To use fully GPU-based SDNCs or SAMs, install FAISS:

conda install faiss-gpu -c pytorch
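
A quick way to confirm that the FAISS install is usable (a minimal sketch, independent of this library; the index type, dimensions, and data below are arbitrary):

import faiss
import numpy as np

d = 32                                        # vector dimensionality
index = faiss.IndexFlatL2(d)                  # exact L2 index
xb = np.random.random((100, d)).astype('float32')
index.add(xb)                                 # add 100 database vectors
distances, ids = index.search(xb[:5], 4)      # 4 nearest neighbours of the first 5 vectors
print(ids.shape)                              # (5, 4)
# on a GPU build, faiss.get_num_gpus() reports the available devices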

pytest is required to run the tests.

Architecture

<img src="./docs/dnc.png" height="600" />

Usage

DNC

Constructor Parameters:

Following are the constructor parameters:

| Argument | Default | Description |
| --- | --- | --- |
| input_size | None | Size of the input vectors |
| hidden_size | None | Size of hidden units |
| rnn_type | 'lstm' | Type of recurrent cells used in the controller |
| num_layers | 1 | Number of layers of recurrent units in the controller |
| num_hidden_layers | 2 | Number of hidden layers per layer of the controller |
| bias | True | Whether the controller RNN layers use bias |
| batch_first | True | Whether data is fed batch first |
| dropout | 0 | Dropout between layers in the controller |
| bidirectional | False | Whether the controller is bidirectional (not yet implemented) |
| nr_cells | 5 | Number of memory cells |
| read_heads | 2 | Number of read heads |
| cell_size | 10 | Size of each memory cell |
| nonlinearity | 'tanh' | If using 'rnn' as rnn_type, non-linearity of the RNNs |
| gpu_id | -1 | ID of the GPU, -1 for CPU |
| independent_linears | False | Whether to use independent linear units to derive the interface vector |
| share_memory | True | Whether to share memory between controller layers |

Following are the forward pass parameters:

| Argument | Default | Description |
| --- | --- | --- |
| input | - | The input vector (B*T*X) or (T*B*X) |
| hidden | (None, None, None) | Hidden states (controller hidden, memory hidden, read vectors) |
| reset_experience | False | Whether to reset memory |
| pass_through_memory | True | Whether to pass through memory |

Example usage

import torch
from dnc import DNC

rnn = DNC(
  input_size=64,
  hidden_size=128,
  rnn_type='lstm',
  num_layers=4,
  nr_cells=100,
  cell_size=32,
  read_heads=4,
  batch_first=True,
  gpu_id=0
)

(controller_hidden, memory, read_vectors) = (None, None, None)

output, (controller_hidden, memory, read_vectors) = \
  rnn(torch.randn(10, 4, 64), (controller_hidden, memory, read_vectors), reset_experience=True)
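
The hidden tuple returned by the forward pass can be fed back in on later calls, so state carries across segments of a long sequence; reset_experience=True is typically passed only when a fresh episode starts. Below is a minimal training sketch; it assumes the DNC output has the same feature size as its input (as in the repo's copy task), and the loss, optimizer, and random data are illustrative only:

import torch
import torch.nn.functional as F
from dnc import DNC

rnn = DNC(
  input_size=8,
  hidden_size=64,
  nr_cells=16,
  cell_size=8,
  read_heads=2,
  batch_first=True
)
optimizer = torch.optim.Adam(rnn.parameters(), lr=1e-4)

for step in range(100):
  x = torch.randn(4, 10, 8)          # batch of 4, 10 timesteps, 8 features
  # treat every batch as a fresh episode, so memory is reset each time
  output, (controller_hidden, memory, read_vectors) = \
    rnn(x, (None, None, None), reset_experience=True)
  loss = F.mse_loss(output, x)       # illustrative target: reproduce the input
  optimizer.zero_grad()
  loss.backward()
  optimizer.step()

To carry memory across successive calls instead of resetting it, feed the returned hidden tuple back in on the next call and detach it between optimizer updates so gradients do not flow through earlier graphs.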

Debugging

The debug option causes the network to return its memory hidden vectors (as numpy ndarrays) for the first batch of each forward step. These vectors can then be analyzed or visualized, for example with visdom.

import torch
from dnc import DNC

rnn = DNC(
  input_size=64,
  hidden_size=128,
  rnn_type='lstm',
  num_layers=4,
  nr_cells=100,
  cell_size=32,
  read_heads=4,
  batch_first=True,
  gpu_id=0,
  debug=True
)

(controller_hidden, memory, read_vectors) = (None, None, None)

output, (controller_hidden, memory, read_vectors), debug_memory = \
  rnn(torch.randn(10, 4, 64), (controller_hidden, memory, read_vectors), reset_experience=True)

Memory vectors returned by forward pass (np.ndarray):

| Key | Y axis (dimensions) | X axis (dimensions) |
| --- | --- | --- |
| debug_memory['memory'] | layer * time | nr_cells * cell_size |
| debug_memory['link_matrix'] | layer * time | nr_cells * nr_cells |
| debug_memory['precedence'] | layer * time | nr_cells |
| debug_memory['read_weights'] | layer * time | read_heads * nr_cells |
| debug_memory['write_weights'] | layer * time | nr_cells |
| debug_memory['usage_vector'] | layer * time | nr_cells |
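
Continuing from the debug example above, one of these arrays can be pushed to a running visdom server as a heatmap; a minimal sketch (the window title is arbitrary):

import visdom

viz = visdom.Visdom()            # assumes `python -m visdom.server` is running
viz.heatmap(
  X=debug_memory['memory'],      # shape: (layer * time) x (nr_cells * cell_size)
  opts=dict(title='DNC memory')
)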

SDNC

Constructor Parameters:

Following are the constructor parameters:

| Argument | Default | Description |
| --- | --- | --- |
| input_size | None | Size of the input vectors |
| hidden_size | None | Size of hidden units |
| rnn_type | 'lstm' | Type of recurrent cells used in the controller |
| num_layers | 1 | Number of layers of recurrent units in the controller |
| num_hidden_layers | 2 | Number of hidden layers per layer of the controller |
| bias | True | Whether the controller RNN layers use bias |
| batch_first | True | Whether data is fed batch first |
| dropout | 0 | Dropout between layers in the controller |
| bidirectional | False | Whether the controller is bidirectional (not yet implemented) |
| nr_cells | 5000 | Number of memory cells |
| read_heads | 4 | Number of read heads |
| sparse_reads | 4 | Number of sparse memory reads per read head |
| temporal_reads | 4 | Number of temporal reads |
| cell_size | 10 | Size of each memory cell |
| nonlinearity | 'tanh' | If using 'rnn' as rnn_type, non-linearity of the RNNs |
| gpu_id | -1 | ID of the GPU, -1 for CPU |
| independent_linears | False | Whether to use independent linear units to derive the interface vector |
| share_memory | True | Whether to share memory between controller layers |

Following are the forward pass parameters:

| Argument | Default | Description |
| --- | --- | --- |
| input | - | The input vector (B*T*X) or (T*B*X) |
| hidden | (None, None, None) | Hidden states (controller hidden, memory hidden, read vectors) |
| reset_experience | False | Whether to reset memory |
| pass_through_memory | True | Whether to pass through memory |

Example usage

import torch
from dnc import SDNC

rnn = SDNC(
  input_size=64,
  hidden_size=128,
  rnn_type='lstm',
  num_layers=4,
  nr_cells=100,
  cell_size=32,
  read_heads=4,
  sparse_reads=4,
  batch_first=True,
  gpu_id=0
)

(controller_hidden, memory, read_vectors) = (None, None, None)

output, (controller_hidden, memory, read_vectors) = \
  rnn(torch.randn(10, 4, 64), (controller_hidden, memory, read_vectors), reset_experience=True)

Debugging

The debug option causes the network to return its memory hidden vectors (as numpy ndarrays) for the first batch of each forward step. These vectors can then be analyzed or visualized, for example with visdom.

import torch
from dnc import SDNC

rnn = SDNC(
  input_size=64,
  hidden_size=128,
  rnn_type='lstm',
  num_layers=4,
  nr_cells=100,
  cell_size=32,
  read_heads=4,
  batch_first=True,
  sparse_reads=4,
  temporal_reads=4,
  gpu_id=0,
  debug=True
)

(controller_hidden, memory, read_vectors) = (None, None, None)

output, (controller_hidden, memory, read_vectors), debug_memory = \
  rnn(torch.randn(10, 4, 64), (controller_hidden, memory, read_vectors), reset_experience=True)

Memory vectors returned by forward pass (np.ndarray):

| Key | Y axis (dimensions) | X axis (dimensions) |
| --- | --- | --- |
| debug_memory['memory'] | layer * time | nr_cells * cell_size |
| debug_memory['visible_memory'] | layer * time | sparse_reads+2*temporal_reads+1 * nr_cells |
| debug_memory['read_positions'] | layer * time | sparse_reads+2*temporal_reads+1 |
| debug_memory['link_matrix'] | layer * time | sparse_reads+2*temporal_reads+1 * sparse_reads+2*temporal_reads+1 |
| debug_memory['rev_link_matrix'] | layer * time | sparse_reads+2*temporal_reads+1 * sparse_reads+2*temporal_reads+1 |
| debug_memory['precedence'] | layer * time | nr_cells |
| debug_memory['read_weights'] | layer * time | read_heads * nr_cells |
| debug_memory['write_weights'] | layer * time | nr_cells |
| debug_memory['usage'] | layer * time | nr_cells |

SAM

Constructor Parameters:

Following are the constructor parameters:

| Argument | Default | Description |
| --- | --- | --- |
| input_size | None | Size of the input vectors |
| hidden_size | None | Size of hidden units |
| rnn_type | 'lstm' | Type of recurrent cells used in the controller |
| num_layers | 1 | Number of layers of recurrent units in the controller |
| num_hidden_layers | 2 | Number of hidden layers per layer of the controller |
| bias | True | Whether the controller RNN layers use bias |
| batch_first | True | Whether data is fed batch first |
| dropout | 0 | Dropout between layers in the controller |
| bidirectional | False | Whether the controller is bidirectional (not yet implemented) |
| nr_cells | 5000 | Number of memory cells |
| read_heads | 4 | Number of read heads |
| sparse_reads | 4 | Number of sparse memory reads per read head |
| cell_size | 10 | Size of each memory cell |
| nonlinearity | 'tanh' | If using 'rnn' as rnn_type, non-linearity of the RNNs |
| gpu_id | -1 | ID of the GPU, -1 for CPU |
| independent_linears | False | Whether to use independent linear units to derive the interface vector |
| share_memory | True | Whether to share memory between controller layers |

Following are the forward pass parameters:

| Argument | Default | Description |
| --- | --- | --- |
| input | - | The input vector (B*T*X) or (T*B*X) |
| hidden | (None, None, None) | Hidden states (controller hidden, memory hidden, read vectors) |
| reset_experience | False | Whether to reset memory |
| pass_through_memory | True | Whether to pass through memory |

Example usage

import torch
from dnc import SAM

rnn = SAM(
  input_size=64,
  hidden_size=128,
  rnn_type='lstm',
  num_layers=4,
  nr_cells=100,
  cell_size=32,
  read_heads=4,
  sparse_reads=4,
  batch_first=True,
  gpu_id=0
)

(controller_hidden, memory, read_vectors) = (None, None, None)

output, (controller_hidden, memory, read_vectors) = \
  rnn(torch.randn(10, 4, 64), (controller_hidden, memory, read_vectors), reset_experience=True)

Debugging

The debug option causes the network to return its memory hidden vectors (as numpy ndarrays) for the first batch of each forward step. These vectors can then be analyzed or visualized, for example with visdom.

import torch
from dnc import SAM

rnn = SAM(
  input_size=64,
  hidden_size=128,
  rnn_type='lstm',
  num_layers=4,
  nr_cells=100,
  cell_size=32,
  read_heads=4,
  batch_first=True,
  sparse_reads=4,
  gpu_id=0,
  debug=True
)

(controller_hidden, memory, read_vectors) = (None, None, None)

output, (controller_hidden, memory, read_vectors), debug_memory = \
  rnn(torch.randn(10, 4, 64), (controller_hidden, memory, read_vectors), reset_experience=True)

Memory vectors returned by forward pass (np.ndarray):

| Key | Y axis (dimensions) | X axis (dimensions) |
| --- | --- | --- |
| debug_memory['memory'] | layer * time | nr_cells * cell_size |
| debug_memory['visible_memory'] | layer * time | sparse_reads+2*temporal_reads+1 * nr_cells |
| debug_memory['read_positions'] | layer * time | sparse_reads+2*temporal_reads+1 |
| debug_memory['read_weights'] | layer * time | read_heads * nr_cells |
| debug_memory['write_weights'] | layer * time | nr_cells |
| debug_memory['usage'] | layer * time | nr_cells |

Tasks

Copy task (with curriculum and generalization)

The copy task, as described in the original paper, is included in the repo.

From the project root:

python ./tasks/copy_task.py -cuda 0 -optim rmsprop -batch_size 32 -mem_slot 64 # (like original implementation)

python ./tasks/copy_task.py -cuda 0 -lr 0.001 -rnn_type lstm -nlayer 1 -nhlayer 2 -dropout 0 -mem_slot 32 -batch_size 1000 -optim adam -sequence_max_length 8 # (faster convergence)

For SDNCs:
python ./tasks/copy_task.py -cuda 0 -lr 0.001 -rnn_type lstm -memory_type sdnc -nlayer 1 -nhlayer 2 -dropout 0 -mem_slot 100 -mem_size 10  -read_heads 1 -sparse_reads 10 -batch_size 20 -optim adam -sequence_max_length 10

and for curriculum learning for SDNCs:
python ./tasks/copy_task.py -cuda 0 -lr 0.001 -rnn_type lstm -memory_type sdnc -nlayer 1 -nhlayer 2 -dropout 0 -mem_slot 100 -mem_size 10  -read_heads 1 -sparse_reads 4 -temporal_reads 4 -batch_size 20 -optim adam -sequence_max_length 4 -curriculum_increment 2 -curriculum_freq 10000

For the full set of options, see:

python ./tasks/copy_task.py --help

The copy task can be used to debug memory using Visdom.

Additional steps required:

pip install visdom
python -m visdom.server

Open http://localhost:8097/ in your browser, and execute the copy task:

python ./tasks/copy_task.py -cuda 0

The visdom dashboard shows memory as a heatmap for batch 0 every -summarize_freq iterations:

Visdom dashboard

Generalizing Addition task

The adding task is as described in this github pull request. The network is first trained on sequences of length ~100 and then tested on whether it generalizes to lengths of ~1000.

python ./tasks/adding_task.py -cuda 0 -lr 0.0001 -rnn_type lstm -memory_type sam -nlayer 1 -nhlayer 1 -nhid 100 -dropout 0 -mem_slot 1000 -mem_size 32 -read_heads 1 -sparse_reads 4 -batch_size 20 -optim rmsprop -input_size 3 -sequence_max_length 100

Generalizing Argmax task

The second adding task is similar to the first one, except that the network's output at the last time step is expected to be the argmax of the input.

python ./tasks/argmax_task.py -cuda 0 -lr 0.0001 -rnn_type lstm -memory_type dnc -nlayer 1 -nhlayer 1 -nhid 100 -dropout 0 -mem_slot 100 -mem_size 10 -read_heads 2 -batch_size 1 -optim rmsprop -sequence_max_length 15 -input_size 10 -iterations 10000

Code Structure

  1. DNCs:
  2. SDNCs:
  3. SAMs:
  4. Tests:

General noteworthy stuff

  1. SDNCs use the FLANN approximate nearest neighbour library, with its python binding pyflann3, and FAISS.

FLANN can be installed either from pip (automatically as a dependency), or from source (e.g. for multithreading via OpenMP):

# install openmp first: e.g. `sudo pacman -S openmp` for Arch.
git clone git://github.com/mariusmuja/flann.git
cd flann
mkdir build
cd build
cmake ..
make -j 4
sudo make install

FAISS can be installed using:

conda install faiss-gpu -c pytorch

FAISS is much faster, has a GPU implementation, and is interoperable with PyTorch tensors. We try to use FAISS by default and fall back to FLANN when it is unavailable.

  2. nans in the gradients are common; try different batch sizes (a gradient-clipping sketch follows below).
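
Gradient clipping (a standard PyTorch technique, not something this library applies automatically) is one way to keep such runs stable. A minimal sketch, with an illustrative model, placeholder loss, and arbitrary clipping threshold:

import torch
from dnc import DNC

rnn = DNC(input_size=8, hidden_size=64, nr_cells=16, cell_size=8, read_heads=2, batch_first=True)
optimizer = torch.optim.Adam(rnn.parameters(), lr=1e-4)

x = torch.randn(4, 10, 8)
output, hidden = rnn(x, (None, None, None), reset_experience=True)
loss = output.pow(2).mean()                                       # placeholder loss, for illustration only
loss.backward()
torch.nn.utils.clip_grad_norm_(rnn.parameters(), max_norm=10.0)   # clip before the update
optimizer.step()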

Repos referred to during the creation of this repo: