The Numenta Anomaly Benchmark (NAB)

Welcome. This repository contains the data and scripts that comprise the Numenta Anomaly Benchmark (NAB) v1.1. NAB is a novel benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications. It is composed of over 50 labeled real-world and artificial timeseries data files plus a scoring mechanism designed for real-time applications.

Included are the tools to allow you to run NAB on your own anomaly detection algorithms; see the NAB entry points info. Competitive results tied to open source code will be posted on the Scoreboard. Let us know about your work by emailing us at nab@numenta.org or submitting a pull request.

This readme is a brief overview and contains details for setting up NAB. Please refer to the NAB wiki and the publication below for more details about NAB scoring, data, and motivation.

We encourage you to publish your results on running NAB, and share them with us at nab@numenta.org. Please cite the following publication when referring to NAB:

Ahmad, S., Lavin, A., Purdy, S., & Agha, Z. (2017). Unsupervised real-time anomaly detection for streaming data. Neurocomputing, Available online 2 June 2017, ISSN 0925-2312, https://doi.org/10.1016/j.neucom.2017.04.070

Scoreboard

The NAB scores are normalized such that the maximum possible is 100.0 (i.e. the perfect detector), and a baseline of 0.0 is determined by the "null" detector (which makes no detections).
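In other words, a detector's raw score is rescaled against the raw scores of the null and perfect detectors. A minimal sketch of that rescaling in Python (the function and the example numbers are illustrative; NAB computes the actual null and perfect raw scores from the benchmark itself):

```python
def normalize(raw_score, null_raw, perfect_raw):
    # Map a raw NAB score onto a 0-100 scale where the null detector
    # scores 0.0 and the perfect detector scores 100.0.
    return 100.0 * (raw_score - null_raw) / (perfect_raw - null_raw)

# With illustrative raw scores of -100.0 (null) and 50.0 (perfect):
print(normalize(12.5, -100.0, 50.0))  # 75.0
```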

| Detector | Standard Profile | Reward Low FP | Reward Low FN |
|----------|------------------|---------------|---------------|
| Perfect | 100.0 | 100.0 | 100.0 |
| ARTime | 74.9 | 65.1 | 80.4 |
| Numenta HTM* | 70.5-69.7 | 62.6-61.7 | 75.2-74.2 |
| CAD OSE† | 69.9 | 67.0 | 73.2 |
| earthgecko Skyline | 58.2 | 46.2 | 63.9 |
| KNN CAD† | 58.0 | 43.4 | 64.8 |
| Relative Entropy | 54.6 | 47.6 | 58.8 |
| Random Cut Forest**** | 51.7 | 38.4 | 59.7 |
| Twitter ADVec v1.0.0 | 47.1 | 33.6 | 53.5 |
| Windowed Gaussian | 39.6 | 20.9 | 47.4 |
| Etsy Skyline | 35.7 | 27.1 | 44.5 |
| Bayesian Changepoint** | 17.7 | 3.2 | 32.2 |
| EXPoSE | 16.4 | 3.2 | 26.9 |
| Random*** | 11.0 | 1.2 | 19.5 |
| Null | 0.0 | 0.0 | 0.0 |

As of NAB v1.0

* From NuPIC version 1.0 (available on PyPI); the range in scores represents runs using different random seeds.

** The original algorithm was modified for anomaly detection. Implementation details are in the detector's code.

*** Scores reflect the mean across a range of random seeds. The spread of scores for each profile is 7.95 to 16.83 for Standard, -1.56 to 2.14 for Reward Low FP, and 11.34 to 23.68 for Reward Low FN.

**** We have included the results for RCF obtained using a proprietary AWS implementation; although the algorithm code is not open source, the algorithm description is public, and the code we used to run NAB on RCF is open source.

† Algorithm was an entry to the 2016 NAB Competition.

Please see the wiki section on contributing algorithms for discussion on posting algorithms to the scoreboard.

Corpus

The NAB corpus of 58 timeseries data files is designed to provide data for research in streaming anomaly detection. It comprises both real-world and artificial timeseries data containing labeled anomalous periods of behavior.

The majority of the data is real-world data from a variety of sources such as AWS server metrics, Twitter volume, advertisement clicking metrics, traffic data, and more. All data is included in the repository, with more details in the data readme. Please contact us at nab@numenta.org if you have similar data (ideally with known anomalies) that you would like to see incorporated into NAB.

The NAB version will be updated whenever new data (and corresponding labels) is added to the corpus or other significant changes are made.

Additional Scores

For comparison, here are the NAB v1.0 scores for some additional flavors of HTM.

| Detector | Standard Profile | Reward Low FP | Reward Low FN |
|----------|------------------|---------------|---------------|
| Numenta HTM, using NuPIC v0.5.6* | 70.1 | 63.1 | 74.3 |
| nab-comportex† | 64.6 | 58.8 | 69.6 |
| NumentaTM HTM* | 64.6 | 56.7 | 69.2 |
| HTM Java | 56.8 | 50.7 | 61.4 |
| Numenta HTM*, no likelihood | 53.62 | 34.15 | 61.89 |

* From NuPIC version 0.5.6 (available on PyPI).

† Algorithm was an entry to the 2016 NAB Competition.

Installing NAB

Supported Platforms

NAB is developed and tested on Unix-like platforms. Other platforms may work; NAB has been tested on Windows 10 but is not officially supported.

Initial requirements

You need to manually install the following:

Python 3 (NAB is a Python 3 framework; see "Running non-Python 3 detectors" below for the exceptions)
pip

Download this repository

Use the GitHub links provided in the right sidebar.

Install NAB

Pip:

From inside the checkout directory:

pip install -r requirements.txt
pip install . --user

If you want to manage dependency versions yourself, you can skip dependencies with:

pip install . --user --no-deps

If you are actively working on the code and are familiar with manual PYTHONPATH setup:

pip install -e . --install-option="--prefix=/some/other/path/"

Anaconda:

conda env create
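With either method, you can sanity-check the installation (assuming the package installs under the name nab, as in this repository's setup):

```
python -c "import nab"
```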

Usage

There are several different use cases for NAB:

  1. If you want to look at all the results we reported in the paper, there is no need to run anything. All the data files are in the data subdirectory and all individual detections for reported algorithms are checked in to the results subdirectory. Please see the README files in those locations.

  2. If you want to plot some of the results, please see the README in the scripts directory for scripts/plot.py.

  3. If you have your own algorithm and want to run the NAB benchmark, please see the NAB Entry Points section in the wiki. (The easiest option is often to simply run your algorithm on the data and output results in the CSV format we specify, then run the NAB scoring algorithm to compute the final scores; a sketch of this appears after this list. This is how we scored the Twitter algorithm, which is written in R.)

  4. If you are a NuPIC user and want to run the Numenta HTM detector follow the directions below to "Run HTM with NAB".

  5. If you want to run everything including the bundled Skyline detector follow the directions below to "Run full NAB". Note that this will take hours as the Skyline code is quite slow.

  6. If you want to run NAB on one or more data files (e.g. for debugging) follow the directions below to "Run a subset of NAB".
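As a sketch of option 3 above, here is one way to emit a results file in Python. The column names and output path are assumptions modeled on the files in the results subdirectory; check the README there and the wiki for the exact schema NAB expects:

```python
import csv

def write_results(path, rows):
    # Write detector output for one data file: one row per record,
    # with an anomaly score in [0, 1] for each timestamp.
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "value", "anomaly_score"])
        writer.writerows(rows)

# Illustrative path and row; a real run produces one such file per data file.
write_results(
    "results/myDetector/realAWSCloudwatch/myDetector_ec2_cpu.csv",
    [("2014-04-10 00:00:00", 42.0, 0.03)],
)
```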

Run a detector on NAB
cd /path/to/nab
python run.py -d expose --detect --optimize --score --normalize

This will run the EXPoSE detector only and produce normalized scores. Note that by default it tries to use all the cores on your machine. The above command should take about 20-30 minutes on a modern laptop with 4-8 cores. For debugging you can run on a subset of the data files by specifying a custom windows file (see "Run subset of NAB data files" below). Please type:

python run.py --help

to see all the options.

Running non-Python 3 detectors

NAB is a Python 3 framework and can only integrate Python 3 detectors. The following detectors must be run outside the NAB runtime, in their native environments, and their results integrated for scoring in a later step:

numenta (Python 2)
numentaTM (Python 2)
htmjava (Python 2 / Java)
twitterADVec (R)
random_cut_forest (AWS Kinesis Analytics)

Instructions on how to run each detector in its native environment can be found in the corresponding nab/detectors/${name} directory. The Python 2 HTM detectors are also provided within a Docker image, available with docker pull numenta/nab:py2.7.
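For example, to get an interactive shell in that image (a minimal sketch, assuming the image provides bash; the per-detector entry points are documented in the corresponding nab/detectors directories):

```
docker pull numenta/nab:py2.7
docker run -it numenta/nab:py2.7 /bin/bash
```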

Run full NAB
cd /path/to/nab
python run.py

This will run all detectors available in this repository and produce results files. To run the non-Python 3 detectors, see "Running non-Python 3 detectors" above.

Note: this option may take many hours to run.

Run subset of NAB data files

For debugging it is sometimes useful to be able to run your algorithm on a subset of the NAB data files or on your own set of data files. You can do that by creating a custom windows file in exactly the same format as labels/combined_windows.json, containing windows only for the files you want to run.
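For instance, a minimal sketch of building such a file from the full label set in Python (the output filename and the path filter are illustrative; combined_windows.json maps each data file's relative path to its list of anomaly windows):

```python
import json

# Load the full set of anomaly windows, keyed by relative data file path.
with open("labels/combined_windows.json") as f:
    windows = json.load(f)

# Keep only the files of interest, e.g. the AWS CloudWatch subset.
subset = {path: spans for path, spans in windows.items()
          if path.startswith("realAWSCloudwatch/")}

with open("labels/my_combined_windows.json", "w") as f:
    json.dump(subset, f, indent=2)
```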

A ready-made example containing windows for two data files is provided in labels/combined_windows_tiny.json. The following command shows how to run NAB on a subset of labels:

cd /path/to/nab
python run.py -d expose --detect --windowsFile labels/combined_windows_tiny.json

This will run the detect phase of NAB on the data files specified in the above JSON file. Note that scoring and normalization are not supported with this option. Note also that you may see warning messages regarding the lack of labels for other files. You can ignore these warnings.