# PyTorch LMU
This repository contains PyTorch implementations of the following papers:
- Legendre Memory Units: Continuous-Time Representation in Recurrent Neural Networks, by Voelker AR, Kajić I, and Eliasmith C
- Parallelizing Legendre Memory Unit Training, by Chilkuri N and Eliasmith C
Performance on the psMNIST dataset is demonstrated in `examples/`.
## Usage

`torch`, `numpy`, and `scipy` are the only requirements.
`src/lmu.py` contains the implementations of `LMUCell`, `LMU`, and `LMUFFT`.
Examples:
- `LMU`

  ```python
  import torch
  from lmu import LMU

  model = LMU(
      input_size = 1,
      hidden_size = 212,
      memory_size = 256,
      theta = 784
  )

  x = torch.rand(100, 784, 1) # [batch_size, seq_len, input_size]
  output, (h_n, m_n) = model(x)
  ```
- `LMUFFT`

  ```python
  import torch
  from lmu import LMUFFT

  model = LMUFFT(
      input_size = 1,
      hidden_size = 346,
      memory_size = 468,
      seq_len = 784,
      theta = 784
  )

  x = torch.rand(100, 784, 1) # [batch_size, seq_len, input_size]
  output, h_n = model(x)
  ```
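`LMUCell` processes one time step at a time. Below is a minimal sketch of unrolling it manually; the constructor arguments and the `cell(x_t, (h, m))` step signature are assumptions inferred from the `LMU` example above, so check `src/lmu.py` for the exact interface.

```python
import torch
from lmu import LMUCell

# Assumed constructor: same hyperparameters as LMU, without the sequence wrapper
cell = LMUCell(
    input_size = 1,
    hidden_size = 212,
    memory_size = 256,
    theta = 784
)

x = torch.rand(100, 784, 1)   # [batch_size, seq_len, input_size]
h = torch.zeros(100, 212)     # hidden state  [batch_size, hidden_size]
m = torch.zeros(100, 256)     # memory state  [batch_size, memory_size]

# Assumed step signature: (h, m) = cell(x_t, (h, m))
for t in range(x.shape[1]):
    h, m = cell(x[:, t, :], (h, m))
```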
## Running on psMNIST

- Clone this repository and open:
  - `examples/lmu_psmnist.ipynb`, for training and evaluating an LMU model on the psMNIST dataset
  - `examples/lmu_fft_psmnist.ipynb`, for training and evaluating an LMUFFT model on the psMNIST dataset
- `examples/permutation.pt` contains the permutation tensor used while creating the psMNIST data; it's included for reproducibility. Alternatively, `torch.randperm(784)` can be used to test with a new permutation, as sketched below.
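For reference, a psMNIST sequence is an MNIST image flattened to 784 pixels and reordered by a fixed permutation. The sketch below shows one way to apply the permutation; the use of `torchvision` here is an assumption and may differ from the preprocessing in the notebooks.

```python
import torch
from torchvision import datasets, transforms

# Fixed pixel permutation: the one shipped with the repo, or a fresh one
permutation = torch.load("examples/permutation.pt")  # or: torch.randperm(784)

mnist = datasets.MNIST(root="data", train=True, download=True,
                       transform=transforms.ToTensor())

img, label = mnist[0]              # img: [1, 28, 28]
seq = img.view(-1)[permutation]    # flatten to [784] and permute the pixel order
seq = seq.unsqueeze(-1)            # [784, 1]: one pixel per time step
```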
## References
- Voelker, Aaron R., Ivana Kajić, and Chris Eliasmith. "Legendre memory units: Continuous-time representation in recurrent neural networks." (2019).
- Chilkuri, Narsimha, and Chris Eliasmith. "Parallelizing Legendre Memory Unit Training." (2021).
- Official Keras implementation of LMU and LMUFFT: nengo/keras-lmu
- Legendre Memory Units in NengoDL