# GPCM

Implementation of the GPCM and variations.
Citation:

```bibtex
@inproceedings{Bruinsma:2022:Modelling_Non-Smooth_Signals_With_Complex,
    title = {Modelling Non-Smooth Signals With Complex Spectral Structure},
    year = {2022},
    author = {Wessel P. Bruinsma and Martin Tegn{\'e}r and Richard E. Turner},
    booktitle = {Proceedings of the 25th International Conference on Artificial Intelligence and Statistics},
    series = {Proceedings of Machine Learning Research},
    publisher = {PMLR},
    eprint = {https://arxiv.org/abs/2203.06997},
}
```
Contents:

- Installation
- Example
- Available Models and Approximation Schemes
- Making Predictions With a Model
- Sample Experiments
- Reproduce Experiments From the Paper
## Installation

See the instructions here. Then simply

```bash
pip install gpcm
```

If you have a GPU available, it is recommended that you use a GPU-accelerated version of JAX.
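For example, at the time of writing, a CUDA-enabled JAX can be installed via an extra. This is only a sketch; check the JAX installation instructions for the variant matching your CUDA setup:

```bash
pip install "jax[cuda12]"
```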
## Example

```python
import numpy as np

from gpcm import RGPCM

model = RGPCM(window=2, scale=1, n_u=30, t=(0, 10))

# Sample from the prior.
t = np.linspace(0, 10, 100)
K, y = model.sample(t)  # Sampled kernel matrix and sampled noisy function

# Fit model to the sample.
model.fit(t, y)

# Compute the ELBO.
print(model.elbo(t, y))

# Make predictions.
posterior = model.condition(t, y)
mean, var = posterior.predict(t)
```
## Available Models and Approximation Schemes

The following models are available:

| Model | Description |
| --- | --- |
| `GPCM` | White noise excitation with a smooth filter |
| `CGPCM` | White noise excitation with a smooth causal filter |
| `RGPCM` | Ornstein-Uhlenbeck excitation with a white noise filter |
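All three models are constructed through the same interface. A minimal sketch, assuming all three classes are exported from the top-level `gpcm` package as `GPCM` and `RGPCM` are in the examples elsewhere in this README:

```python
from gpcm import CGPCM, GPCM, RGPCM

# The three variants share the same construction interface.
for Model in (GPCM, CGPCM, RGPCM):
    model = Model(window=2, scale=1, t=(0, 10))
```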
The simplest way of constructing a model is to set the following keywords:

| Keyword | Description |
| --- | --- |
| `window` | Largest length scale of the signal |
| `scale` | Smallest length scale of the signal |
| `t` | Some iterable containing the limits of the inputs of interest |
Example:

```python
from gpcm import RGPCM

model = RGPCM(window=4, scale=0.5, t=(0, 10))
```
Please see the API for a detailed description of the keyword arguments which you can set. Amongst these keyword arguments, we highlight the following few, which are important:

| Optional Keyword | Description |
| --- | --- |
| `n_u` | Number of inducing points for the filter |
| `n_z` (`GPCM` and `CGPCM`) | Number of inducing points for the excitation signal |
| `m_max` (`RGPCM`) | Half of the number of variational Fourier features. Set to `n_z // 2` for equal computational expense. |
| `t` | Some iterable containing the limits of the inputs of interest |
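For instance, here is a sketch of setting these keywords for a `CGPCM` and an `RGPCM` of roughly equal computational expense; the particular numbers are illustrative only:

```python
from gpcm import CGPCM, RGPCM

# CGPCM: inducing points for both the filter and the excitation signal.
model1 = CGPCM(window=4, scale=0.5, n_u=30, n_z=40, t=(0, 10))

# RGPCM: m_max = n_z // 2 = 20 for roughly equal computational expense.
model2 = RGPCM(window=4, scale=0.5, n_u=30, m_max=20, t=(0, 10))
```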
The constructors of these models also take in a keyword `scheme`, which can be set to one of the following values:
| `scheme` | Description |
| --- | --- |
| `"structured"` (default) | Structured approximation. Recommended. |
| `"mean-field-ca"` | Mean-field approximation learned by coordinate ascent. This does not learn hyperparameters. |
| `"mean-field-gradient"` | Mean-field approximation learned by gradient-based optimisation |
| `"mean-field-collapsed-gradient"` | Collapsed mean-field approximation learned by gradient-based optimisation |
Example:

```python
from gpcm import RGPCM

model = RGPCM(scheme="mean-field-ca", window=4, scale=0.5, t=(0, 10))
```
## Making Predictions With a Model

The implemented models follow the interface from ProbMods.

To begin with, construct a model:

```python
from gpcm import GPCM

model = GPCM(window=4, scale=0.5, t=(0, 10))
```
### Sample From the Prior

Sampling gives back the sampled kernel matrix and the noisy outputs.

```python
K, y = model.sample(t)
```
### Fit the Model to Data

It is recommended that you normalise the data before fitting. It is also recommended that you do not fit the model to more than 1000 data points.

```python
model.fit(t, y)
```
The function `fit` takes in the keyword argument `iters`. The rule of thumb which you can use is as follows:

| `iters` | Description |
| --- | --- |
| `5_000` (default) | Reasonable fit |
| `10_000` | Better fit |
| `20_000` | Good fit |
| `30_000` | Pretty good fit |
The function `fit` also takes in the keyword argument `rate`. The rule of thumb which you can use here is as follows:

| `rate` | Description |
| --- | --- |
| `5e-2` (default) | Fast learning |
| `2e-2` | Slower, but more stable learning |
| `5e-3` | Slow learning |
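Putting the two together, a sketch of a longer but more stable fit; the particular values are illustrative:

```python
# More iterations at a lower learning rate: slower but more stable.
model.fit(t, y, iters=20_000, rate=2e-2)
```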
### Compute the ELBO

It is recommended that you normalise the data before computing the ELBO. It is also recommended that you do not compute the ELBO for more than 1000 data points.

```python
elbo = model.elbo(t, y)
```
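Since the ELBO is a lower bound on the log marginal likelihood, one natural use is comparing models fitted to the same data. A minimal sketch, assuming `t` and `y` are identically normalised for both models:

```python
from gpcm import GPCM, RGPCM

# Fit two variants to the same data and compare their ELBOs.
for Model in (GPCM, RGPCM):
    model = Model(window=2, scale=1, t=(0, 10))
    model.fit(t, y)
    print(Model.__name__, model.elbo(t, y))
```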
### Condition the Model on Data

It is recommended that you normalise the data before conditioning and unnormalise the predictions. It is also recommended that you do not condition on more than 1000 data points.

```python
posterior_model = model.condition(t, y)
```
### Make Predictions

Predictions for new inputs:

```python
mean, var = posterior_model.predict(t_new)
```
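A minimal sketch of the recommended normalise-then-unnormalise pattern, where `t_new` is a hypothetical array of prediction inputs:

```python
import numpy as np

# Normalise the data before conditioning.
y_mean, y_std = np.mean(y), np.std(y)
posterior_model = model.condition(t, (y - y_mean) / y_std)

# Unnormalise the predictions afterwards.
t_new = np.linspace(0, 12, 200)  # Hypothetical prediction inputs.
mean, var = posterior_model.predict(t_new)
mean = mean * y_std + y_mean
var = var * y_std**2
```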
Predictions for the kernel:

```python
pred = posterior_model.predict_kernel()
x, mean, var = pred.x, pred.mean, pred.var
```
Predictions for the PSD:

```python
pred = posterior_model.predict_psd()
x, mean, var = pred.x, pred.mean, pred.var
```
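The kernel and PSD predictions return pointwise means and variances, so a credible band can be plotted directly. A sketch using matplotlib; the 1.96 factor assumes an approximately Gaussian marginal:

```python
import matplotlib.pyplot as plt
import numpy as np

# Plot the predicted PSD with an approximate 95% credible band.
pred = posterior_model.predict_psd()
err = 1.96 * np.sqrt(pred.var)
plt.plot(pred.x, pred.mean, label="Predicted PSD")
plt.fill_between(pred.x, pred.mean - err, pred.mean + err, alpha=0.3)
plt.xlabel("Frequency")
plt.ylabel("PSD")
plt.legend()
plt.show()
```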
## Sample Experiments

### Learn a GP With a Known Kernel

```bash
python experiments/eq.py
python experiments/smk.py
```

### Learn the Mauna Loa CO2 Data Set

```bash
python experiments/mauna_loa.py
```
## Reproduce Experiments From the Paper

See here.