Benchmarking nearest neighbors

Fast nearest-neighbor search in high-dimensional spaces is an increasingly important problem, yet there have been notably few attempts to compare approaches empirically and objectively, even though such comparisons are needed to drive optimization forward.

This project contains tools to benchmark various implementations of approximate nearest neighbor (ANN) search for selected metrics. We have pre-generated datasets (in HDF5 format) and prepared Docker containers for each algorithm, as well as a test suite to verify function integrity.

Evaluated

Data sets

We have a number of precomputed data sets in HDF5 format. All data sets have been pre-split into train/test and include ground truth data for the top-100 nearest neighbors.

| Dataset | Dimensions | Train size | Test size | Neighbors | Distance | Download |
| --- | --- | --- | --- | --- | --- | --- |
| DEEP1B | 96 | 9,990,000 | 10,000 | 100 | Angular | HDF5 (3.6GB) |
| Fashion-MNIST | 784 | 60,000 | 10,000 | 100 | Euclidean | HDF5 (217MB) |
| GIST | 960 | 1,000,000 | 1,000 | 100 | Euclidean | HDF5 (3.6GB) |
| GloVe | 25 | 1,183,514 | 10,000 | 100 | Angular | HDF5 (121MB) |
| GloVe | 50 | 1,183,514 | 10,000 | 100 | Angular | HDF5 (235MB) |
| GloVe | 100 | 1,183,514 | 10,000 | 100 | Angular | HDF5 (463MB) |
| GloVe | 200 | 1,183,514 | 10,000 | 100 | Angular | HDF5 (918MB) |
| Kosarak | 27,983 | 74,962 | 500 | 100 | Jaccard | HDF5 (33MB) |
| MNIST | 784 | 60,000 | 10,000 | 100 | Euclidean | HDF5 (217MB) |
| MovieLens-10M | 65,134 | 69,363 | 500 | 100 | Jaccard | HDF5 (63MB) |
| NYTimes | 256 | 290,000 | 10,000 | 100 | Angular | HDF5 (301MB) |
| SIFT | 128 | 1,000,000 | 10,000 | 100 | Euclidean | HDF5 (501MB) |
| Last.fm | 65 | 292,385 | 50,000 | 100 | Angular | HDF5 (135MB) |
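Each file can be inspected with h5py. A minimal sketch, assuming the conventional dataset keys (`train`, `test`, `neighbors` for the ground-truth ids), and written against a tiny synthetic file rather than a real download:

```python
# Sketch: the HDF5 layout used by the datasets above (key names assumed).
# A miniature file is generated here so the example is self-contained.
import os
import tempfile

import h5py
import numpy as np

path = os.path.join(tempfile.mkdtemp(), "toy.hdf5")

with h5py.File(path, "w") as f:
    f.create_dataset("train", data=np.random.rand(1000, 25).astype(np.float32))
    f.create_dataset("test", data=np.random.rand(10, 25).astype(np.float32))
    # one row of ground-truth neighbor ids per test point
    f.create_dataset("neighbors", data=np.zeros((10, 100), dtype=np.int32))

with h5py.File(path, "r") as f:
    print(f["train"].shape, f["test"].shape, f["neighbors"].shape)
```

Real files follow the shapes in the table above, e.g. GloVe-25 has a `train` dataset of shape (1,183,514, 25).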

Results

All results are as of April 2023, with benchmarks run on an r6i.16xlarge machine on AWS with --parallelism 31 and hyperthreading disabled. All benchmarks are single-CPU.

[Plots for: glove-100-angular, sift-128-euclidean, fashion-mnist-784-euclidean, nytimes-256-angular, gist-960-euclidean, glove-25-angular]

TODO: update plots on http://ann-benchmarks.com.

Install

The only prerequisites are Python (tested with 3.10.6) and Docker.

  1. Clone the repo.
  2. Run pip install -r requirements.txt.
  3. Run python install.py to build all the libraries inside Docker containers (this can take a while, like 10-30 minutes).

Running

  1. Run python run.py (this can take an extremely long time, potentially days).
  2. Run python plot.py or python create_website.py to plot results.
  3. Run python data_export.py --out res.csv to export all results into a CSV file for additional post-processing.
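The quantity these plots trade off against queries per second is recall against the precomputed ground truth. A minimal sketch of recall@k (this mirrors, but is not, the project's own evaluation code; names are illustrative):

```python
# Recall@k: what fraction of the true top-k neighbors the ANN result found.
import numpy as np

def recall_at_k(approx, truth, k=100):
    """Average over queries of |approx top-k ∩ true top-k| / k."""
    hits = [len(set(a[:k]) & set(t[:k])) / k for a, t in zip(approx, truth)]
    return float(np.mean(hits))

truth = np.arange(200).reshape(2, 100)   # ground-truth top-100 ids, 2 queries
approx = truth.copy()
approx[0, :10] = -1                      # first query misses 10 true neighbors
print(recall_at_k(approx, truth))        # -> 0.95
```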

You can customize which algorithms and datasets are run via command-line flags; see python run.py --help for the full list of options.

Including your algorithm

Add your algorithm in the folder ann_benchmarks/algorithms/{YOUR_IMPLEMENTATION}/ by providing the required implementation files. Check the available implementations for inspiration.
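An algorithm wrapper typically exposes a fit/query interface that the benchmark driver calls. The sketch below illustrates that shape only: the method names are assumptions, not copied from the project's base class, and brute-force NumPy search stands in for a real ANN library.

```python
# Illustrative wrapper skeleton (interface assumed; brute force stands in
# for an actual ANN index).
import numpy as np

class BruteForce:
    def __init__(self, metric):
        self.metric = metric  # "euclidean" or "angular"

    def fit(self, X):
        """Build the 'index' from the train set."""
        X = np.asarray(X, dtype=np.float32)
        if self.metric == "angular":
            X = X / np.linalg.norm(X, axis=1, keepdims=True)
        self.index = X

    def query(self, v, n):
        """Return the ids of the n nearest train points to v."""
        v = np.asarray(v, dtype=np.float32)
        if self.metric == "angular":
            v = v / np.linalg.norm(v)
        dists = np.linalg.norm(self.index - v, axis=1)
        return np.argsort(dists)[:n]

algo = BruteForce("euclidean")
algo.fit(np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]]))
print(algo.query([0.9, 0.1], 2))  # -> [1 0]
```

A real wrapper would delegate fit/query to the library under test and expose its hyperparameters for the benchmark to sweep.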

Principles

Authors

Built by Erik Bernhardsson with significant contributions from Martin Aumüller and Alexander Faithfull.

Related Publication

Design principles behind the benchmarking framework are described in the following publications:

Related Projects