# The Neural Testbed

## Introduction
Posterior predictive distributions quantify uncertainties ignored by point estimates.
The `neural_testbed` provides tools for the systematic evaluation of agents that generate such predictions.
Crucially, these tools assess not only the quality of marginal predictions per input, but also joint predictions given many inputs.
Joint distributions are often critical for useful uncertainty quantification, but they have been largely overlooked by the Bayesian deep learning community.
This library automates the evaluation and analysis of learning agents:
- Synthetic neural-network-based generative model.
- Evaluation of predictions beyond marginal distributions.
- Reference implementations of benchmark agents (with tuning).
For a more comprehensive overview, see the accompanying paper.
## Technical overview
We outline the key high-level interfaces for our code in `base.py`:

- `EpistemicSampler`: Generates a random sample from the agent's predictive distribution.
- `TestbedAgent`: Given `data` and `prior_knowledge`, outputs an `EpistemicSampler`.
- `TestbedProblem`: Reveals `train_data` and `prior_knowledge`; can evaluate the quality of an `EpistemicSampler`.
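To make these interfaces concrete, here is a minimal toy agent written against them. It is only a sketch: it assumes the types above (plus a `Data` container) are importable as `from neural_testbed import base as testbed_base`, hardcodes the number of classes, and does no learning at all:

```python
import numpy as np
from neural_testbed import base as testbed_base


def toy_agent(data: testbed_base.Data,
              prior: testbed_base.PriorKnowledge) -> testbed_base.EpistemicSampler:
  """A point-estimate agent that ignores the data and predicts uniform logits."""
  del data, prior  # A real agent would train on `data` and read shapes from `prior`.
  num_classes = 2  # Hardcoded for illustration only.

  def enn_sampler(x: np.ndarray, key) -> np.ndarray:
    # One epistemic sample of class logits for a batch of inputs `x`.
    # Because the output ignores `key`, every sample is identical, so the
    # implied joint predictions carry no information beyond the marginals.
    del key
    return np.zeros((x.shape[0], num_classes))

  return enn_sampler
```

An agent with real epistemic uncertainty would return different logits for different values of `key`; it is this variation across samples that the testbed uses to score joint predictions over batches of inputs.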
If you want to evaluate your algorithm on the testbed, you simply need to define your `TestbedAgent` and then run it on our `experiment.py`:
```python
def run(agent: testbed_base.TestbedAgent,
        problem: testbed_base.TestbedProblem) -> testbed_base.ENNQuality:
  """Run an agent on a given testbed problem."""
  enn_sampler = agent(problem.train_data, problem.prior_knowledge)
  return problem.evaluate_quality(enn_sampler)
```
The `neural_testbed` takes care of the evaluation/logging within the `TestbedProblem`. This means that the experiment will automatically output data in the correct format, which makes it easy to compare results from different codebases/frameworks, so you can focus on agent design.
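As a rough usage sketch, evaluating an agent then boils down to a single call to `run`. Note that `load_problem` below is a hypothetical placeholder for however you construct a `TestbedProblem` (the library's own problem loading lives in `experiments/run.py`), and the import path for `experiment` is an assumption:

```python
from neural_testbed import experiment  # assumed import path for experiment.py

# Hypothetical placeholder: stands in for whatever constructs a TestbedProblem
# (see neural_testbed/experiments/run.py for the library's actual entry point).
problem = load_problem(problem_id='classification')

# `toy_agent` is the TestbedAgent sketched above; any callable with the same
# signature can be evaluated the same way.
quality = experiment.run(agent=toy_agent, problem=problem)
print(quality)  # ENNQuality summarising the agent's predictive performance.
```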
## How do I get started?
If you are new to `neural_testbed`, you can get started with our colab tutorial. This Jupyter notebook is hosted on a free cloud server, so you can start coding right away without installing anything on your machine. After this, you can follow the instructions below to get `neural_testbed` running on your local machine:
## Installation
We have tested `neural_testbed` on Python 3.7. To install the dependencies:

- Optional: We recommend using a Python virtual environment to manage your dependencies, so as not to clobber your system installation:

  ```bash
  python3 -m venv neural_testbed
  source neural_testbed/bin/activate
  pip install --upgrade pip setuptools
  ```

- Install `neural_testbed` directly from GitHub:

  ```bash
  git clone https://github.com/deepmind/neural_testbed.git
  cd neural_testbed
  pip install .
  ```

- Optional: run the tests by executing `./test.sh` from the `neural_testbed` main directory.
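As a quick sanity check that the installation worked (this only imports the package, nothing more):

```python
# Verify the package is importable from the active environment.
import neural_testbed
print('neural_testbed imported from', neural_testbed.__file__)
```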
## Baseline agents
In addition to our testbed code, we release a collection of benchmark agents.
These include the full sets of hyperparameter sweeps necessary to reproduce the paper's results, and can serve as a great starting point for new research.
You can have a look at these implementations in the `agents/factories/` folder.
We recommend you get started with our colab tutorial. After installation, you can also run an agent directly by executing the following command from the main directory of `neural_testbed`:

```bash
python -m neural_testbed.experiments.run --agent_name=mlp
```

By default, this will save the results for that agent to CSV at `/tmp/neural_testbed`. You can control these options via flags in the run file. In particular, you can run the agent on the whole sweep of tasks in the Neural Testbed by specifying the flag `--problem_id=SWEEP`.
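Once a run has finished, the saved results can be inspected with standard tools. A minimal sketch, assuming the run wrote one or more `.csv` files directly under `/tmp/neural_testbed` (the exact file layout and column names depend on how the logger in the run file is configured):

```python
import glob

import pandas as pd

# Collect every CSV the run produced; adjust the pattern if your run file is
# configured to write elsewhere or into subdirectories.
paths = glob.glob('/tmp/neural_testbed/*.csv')
results = pd.concat((pd.read_csv(p) for p in paths), ignore_index=True)

# Inspect whatever columns the logger produced.
print(results.columns.tolist())
print(results.head())
```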
## Citing
If you use `neural_testbed` in your work, please cite the accompanying paper:
```bibtex
@article{osband2022neural,
  title={The Neural Testbed: Evaluating Joint Predictions},
  author={Osband, Ian and Wen, Zheng and Asghari, Seyed Mohammad and Dwaracherla, Vikranth and Hao, Botao and Ibrahimi, Morteza and Lawson, Dieterich and Lu, Xiuyuan and O'Donoghue, Brendan and Van Roy, Benjamin},
  journal={arXiv preprint arXiv:2110.04629},
  year={2022}
}
```