FRCL

Functional Regularisation for Continual Learning with Gaussian Processes

by Pavel Andreev, Peter Mokrov and Alexander Kagan

This is an unofficial PyTorch implementation of the paper https://arxiv.org/abs/1901.11356. The main goal of this project is to provide an independent reproduction of the results presented in the paper.

Project Proposal: pdf
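
In brief, the paper treats a task's last-layer weights as a Gaussian process whose kernel is induced by the shared feature extractor, and regularises training on later tasks with a KL divergence between the stored variational posterior over function values at a few inducing points and the prior induced by the current features. The snippet below is a minimal sketch of our reading of that regulariser; the names (`gaussian_kl`, `frcl_regulariser`, `feature_net`) are illustrative and are not this repository's API.

```python
import torch

def gaussian_kl(mu_q, cov_q, cov_p, jitter=1e-6):
    # KL( N(mu_q, cov_q) || N(0, cov_p) ) between multivariate Gaussians.
    m = mu_q.shape[0]
    eye = jitter * torch.eye(m)
    trace = torch.trace(torch.linalg.solve(cov_p + eye, cov_q))
    quad = mu_q @ torch.linalg.solve(cov_p + eye, mu_q)
    logdet = torch.logdet(cov_p + eye) - torch.logdet(cov_q + eye)
    return 0.5 * (trace + quad - m + logdet)

def frcl_regulariser(feature_net, inducing_x, mu_q, cov_q):
    # GP prior over function values at the stored inducing inputs, induced by
    # the CURRENT feature extractor: k(x, x') = phi(x)^T phi(x').
    phi = feature_net(inducing_x)
    return gaussian_kl(mu_q, cov_q, phi @ phi.T)

# Toy usage: two stored inducing points (cf. --n_inducing 2) for a past task.
net = torch.nn.Sequential(torch.nn.Linear(4, 16), torch.nn.ReLU(), torch.nn.Linear(16, 16))
Z = torch.randn(2, 4)
mu, cov = torch.zeros(2), torch.eye(2)
frcl_regulariser(net, Z, mu, cov).backward()  # gradients reach the feature extractor
```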

Launching experiments

To launch our experiments, use results_script.py. An example run is shown below:

> python .\results_script.py --device 'your device' --task 'permuted_mnist' --method 'baseline' --n_inducing 2

Available options for the --task argument are split_mnist, permuted_mnist, and omniglot. Available options for the --method argument are baseline, frcl_random, and frcl_trace.
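
The frcl_random and frcl_trace methods differ in how inducing points are selected: at random, or (as we understand the paper's trace criterion) by greedily minimising the sparse-GP trace term tr(K_ff - K_fz K_zz^{-1} K_zf) under the feature-induced linear kernel. Below is a minimal sketch of that criterion; `trace_term` and `select_inducing` are hypothetical names, not functions from this repository.

```python
import torch

def trace_term(phi, idx):
    # Sparse-GP trace term tr(K_ff - K_fz K_zz^{-1} K_zf) for the linear
    # kernel K = phi phi^T; smaller means the chosen points summarise the
    # task's function values better.
    K_ff = phi @ phi.T
    K_fz = K_ff[:, idx]
    K_zz = K_ff[idx][:, idx] + 1e-6 * torch.eye(len(idx))
    Q_ff = K_fz @ torch.linalg.solve(K_zz, K_fz.T)
    return torch.trace(K_ff - Q_ff)

def select_inducing(phi, n_inducing):
    # Greedy selection: repeatedly add the candidate that shrinks the trace most.
    chosen, remaining = [], list(range(phi.shape[0]))
    for _ in range(n_inducing):
        scores = torch.stack([trace_term(phi, chosen + [j]) for j in remaining])
        chosen.append(remaining.pop(scores.argmin().item()))
    return chosen

# Toy usage: phi would be the feature embeddings of a task's training inputs.
phi = torch.randn(100, 16)
print(select_inducing(phi, n_inducing=2))
```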

Results of our experiments are presented in .\results. In addition, notebooks with minimal working examples can be found in .\notebooks.

Results

The presentation with the project's main results is available here.

Our results are also summarized in the table below.

| Dataset | Method | N points | Criterion | Accuracy (ours) | Accuracy (paper) |
|---|---|---|---|---|---|
| Split-MNIST | baseline | 2 | - | 0.981 | - |
| Split-MNIST | baseline | 40 | - | 0.985 | 0.958 |
| Split-MNIST | FRCL | 2 | Random | 0.827 | 0.598 |
| Split-MNIST | FRCL | 2 | Trace | 0.82 | 0.82 |
| Split-MNIST | FRCL | 40 | Random | 0.986 | 0.971 |
| Split-MNIST | FRCL | 40 | Trace | 0.979 | 0.978 |
| Permuted-MNIST | baseline | 10 | - | 0.695 | 0.486 |
| Permuted-MNIST | baseline | 80 | - | 0.865 | - |
| Permuted-MNIST | baseline | 200 | - | 0.908 | 0.823 |
| Permuted-MNIST | FRCL | 10 | Random | 0.628/0.527* | 0.482 |
| Permuted-MNIST | FRCL | 80 | Random | 0.838 | - |
| Permuted-MNIST | FRCL | 200 | Random | 0.942 | 0.943 |
| Omniglot-10 | baseline | 60 | - | 0.381 | - |
| Omniglot-10 | FRCL | 60 | Random | 0.376 | - |


* these results appeared to depend significantly on the initialization of the parameters