Liquid Time-Constant Networks (LTCs)

[Update] A PyTorch version together with tutorials has been added to our sister repository: https://github.com/mlech26l/ncps

This is the official repository for the LTC networks described in the paper https://arxiv.org/abs/2006.04439. It allows you to train continuous-time models with backpropagation through time (BPTT). The available continuous-time models are listed below (a toy update step is sketched after the table):

Models | References
Liquid Time-Constant Networks | https://arxiv.org/abs/2006.04439
Neural ODEs | https://papers.nips.cc/paper/7892-neural-ordinary-differential-equations.pdf
Continuous-time RNNs | https://www.sciencedirect.com/science/article/abs/pii/S089360800580125X
Continuous-time Gated Recurrent Units (GRU) | https://arxiv.org/abs/1710.04110
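
All of these models describe the hidden state as the solution of an ordinary differential equation that is unrolled by a fixed-step solver and trained with BPTT. The following toy sketch (plain NumPy, not code from this repository) illustrates a CT-RNN-style update with explicit Euler sub-steps; all parameter and function names are hypothetical.

import numpy as np

def ctrnn_step(h, x, W_in, W_rec, b, tau, dt, ode_steps=6):
    """Toy CT-RNN update: dh/dt = -h/tau + tanh(W_rec h + W_in x + b),
    unrolled with `ode_steps` explicit Euler sub-steps per input sample.
    Illustration only; the repository uses its own solvers and a fused
    ODE step for the LTC."""
    for _ in range(ode_steps):
        dhdt = -h / tau + np.tanh(W_rec @ h + W_in @ x + b)
        h = h + (dt / ode_steps) * dhdt
    return h

# Toy usage: 32 hidden units, 8 input features, unit time constant.
rng = np.random.default_rng(0)
h = np.zeros(32)
h = ctrnn_step(h, rng.standard_normal(8),
               W_in=0.1 * rng.standard_normal((32, 8)),
               W_rec=0.1 * rng.standard_normal((32, 32)),
               b=np.zeros(32), tau=1.0, dt=1.0)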

Requisites

All models were implemented and tested with TensorFlow 1.14.0 and Python 3 on Ubuntu 16.04 and 18.04 machines. All of the following steps assume that they are executed under these conditions.

Preparation

First, we have to download all datasets by running

source download_datasets.sh

This script creates a folder data, where all downloaded datasets are stored.

Training and evaluating the models

There is exactly one Python module per dataset (for example, har.py for the human activity recognition data).

Each script accepts four command-line arguments, which select the model type (--model, as in the example below), the number of hidden units, the number of training epochs, and the interval at which the validation metrics are evaluated.

Each script trains the specified model for the given number of epochs and evaluates the validation performance every log steps. At the end of training, the best-performing checkpoint is restored and the model is evaluated on the test set. All results are stored in the results folder by appending one row per run to a CSV file.
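
The evaluation protocol described above corresponds roughly to the following sketch; train_epoch, evaluate, and model.copy are hypothetical placeholders, not functions from this repository, and are only meant to illustrate the checkpointing and CSV-logging logic.

import csv, os

def run_experiment(model, train_epoch, evaluate, epochs=200, log=1,
                   result_file="results/har/ctrnn_32.csv"):
    """Illustrative training/evaluation loop (hypothetical helpers):
    validate every `log` epochs, keep the best checkpoint, evaluate it
    on the test set, and append one row to the results CSV file."""
    best_valid, best_state = float("inf"), None
    for epoch in range(epochs):
        train_epoch(model)
        if epoch % log == 0:
            valid_loss = evaluate(model, split="valid")
            if valid_loss < best_valid:
                best_valid, best_state = valid_loss, model.copy()  # checkpoint
    test_loss = evaluate(best_state, split="test")
    os.makedirs(os.path.dirname(result_file), exist_ok=True)
    with open(result_file, "a", newline="") as f:
        csv.writer(f).writerow([best_valid, test_loss])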

For example, we can train and evaluate the CT-RNN by executing

python3 har.py --model ctrnn

After the script has finished, there should be a file results/har/ctrnn_32.csv containing one row per run with the recorded validation and test metrics.
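
To aggregate several runs, the rows of such a CSV file can be read back with standard tooling, for example with pandas (an extra dependency assumed here; the exact column names depend on the training script):

import pandas as pd

# Read all runs stored in the CSV and print mean/std of each numeric column.
results = pd.read_csv("results/har/ctrnn_32.csv")
print(results.describe().loc[["mean", "std"]])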

Hyperparameters

Parameter | Value | Description
Minibatch size | 16 | Number of training samples over which the gradient-descent update is computed
Learning rate | 0.001/0.02 | 0.01-0.02 for LTC, 0.001 for all other models
Hidden units | 32 | Number of hidden units of each model
Optimizer | Adam | See (Kingma and Ba, 2014)
beta_1 | 0.9 | Parameter of the Adam method
beta_2 | 0.999 | Parameter of the Adam method
epsilon | 1e-08 | Epsilon-hat parameter of the Adam method
Number of epochs | 200 | Maximum number of training epochs
BPTT length | 32 | Backpropagation-through-time length in time steps
ODE solver steps | 1/6 | Relative to the input sampling period
Validation evaluation interval | 1 | Interval of training epochs at which the metrics on the validation set are evaluated
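
In TensorFlow 1.14, the optimizer settings in the table correspond to a configuration along the following lines. This is only a sketch: the toy loss is a stand-in for the model graph built by each training script, and the learning rate would be raised to 0.01-0.02 when training the LTC.

import tensorflow as tf  # tested against the TensorFlow 1.14 API

# Toy loss just to make the snippet self-contained; in this repository the
# loss comes from the model graph built by each training script.
x = tf.placeholder(tf.float32, shape=[None, 1])
w = tf.Variable(tf.zeros([1]))
loss = tf.reduce_mean(tf.square(x * w - 1.0))

# Adam with the hyperparameters from the table (0.001 for non-LTC models).
optimizer = tf.train.AdamOptimizer(
    learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08)
train_step = optimizer.minimize(loss)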

Trajectory Length Analysis

Run the main.m file to obtain the trajectory-length results; the desired settings can be tuned in the code.