This repository has been archived by the owner. It is now read-only.
Benchmarking approximate nearest neighbors. Note: This is an archived version from our SISAP 2017 paper, see below.


Benchmarking nearest neighbors

This project contains tools to benchmark various implementations of approximate nearest neighbor (ANN) search under different metrics. See the project website for the results of this benchmark.

Note: This is not the main development repository of ann-benchmarks. The version provided here is the one used for the paper ANN-Benchmarks: A Benchmarking Tool for Approximate Nearest Neighbor Algorithms, M. Aumüller, E. Bernhardsson, A. Faithfull.


  • Euclidean space
  • Hamming distance
  • Set similarity

Data sets



Hamming space

We used spherical hashing to generate Hamming-space versions of

  • SIFT
  • NYTimes
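Spherical hashing itself learns data-dependent hypersphere pivots; as a rough, self-contained illustration of how real-valued vectors can be mapped to Hamming codes, here is a simpler random-hyperplane sketch (the function names below are ours, not part of this repository):

```python
import random

def random_hyperplane_codes(vectors, n_bits, seed=0):
    """Binarise real-valued vectors into n_bits-bit Hamming codes using
    random hyperplanes. This is a simplified stand-in for spherical
    hashing, shown for illustration only."""
    rng = random.Random(seed)
    dim = len(vectors[0])
    # One random hyperplane (Gaussian normal vector) per output bit.
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]
    codes = []
    for v in vectors:
        # Each bit records which side of the hyperplane the vector lies on.
        bits = [int(sum(p_i * v_i for p_i, v_i in zip(p, v)) >= 0.0)
                for p in planes]
        codes.append(bits)
    return codes

def hamming(a, b):
    """Hamming distance between two equal-length bit lists."""
    return sum(x != y for x, y in zip(a, b))
```

Nearby vectors tend to fall on the same side of most hyperplanes, so small angular distances map to small Hamming distances.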

Set Similarity

We use the following three data sets:

  • Flickr
  • AOL
  • Kosarak


Motivation

Fast nearest-neighbor search in high-dimensional spaces is an increasingly important problem, but so far there has been little effort to compare methods objectively.


Install

Clone the repo and run the install script. This will install all libraries and could take a while. It has been tested on Ubuntu 16.04. We advise running it only in a VM or in a Docker container (see our Dockerfile).

Downloading and preprocessing the data sets is done by running the install/data-*.sh scripts.

Experiment Setup

Running a set of algorithms with specific parameters works as follows:

  • Check that algos.yaml contains the parameter settings that you want to test
  • To run experiments on SIFT, invoke python ann_benchmarks/main --dataset sift-data --query-dataset sift-query --distance euclidean. See python ann_benchmarks/main --help for more information on possible settings. Note that experiments can take a long time.
  • To process the results, use the provided plotting and website-generation scripts. The website/ directory contains the CSS and JS files necessary for displaying the generated websites. An example call: python --plottype recall/time --latex --scatter --outputdir website/.

Including Your Algorithm

You have two choices for including your own algorithm. If your algorithm has a Python wrapper (or is entirely written in Python), all you need to do is add it to ann_benchmarks/algorithms by providing a small wrapper.
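A wrapper along these lines might look as follows. This is only a sketch using a brute-force baseline: the actual base class and the exact method signatures expected by ann_benchmarks/algorithms should be taken from the existing wrappers, not from this example.

```python
import heapq
import math

class BruteForceWrapper(object):
    """Illustrative wrapper skeleton. The fit/query interface shown here
    is an assumption; consult the existing wrappers in
    ann_benchmarks/algorithms for the actual required methods."""

    def __init__(self, metric):
        self._metric = metric  # e.g. 'euclidean'
        self._data = []

    def fit(self, X):
        # Index-build step: a brute-force baseline just stores the points.
        self._data = list(X)

    def query(self, v, n):
        # Return the indices of the n points closest to v.
        dists = [(self._distance(v, x), i) for i, x in enumerate(self._data)]
        return [i for _, i in heapq.nsmallest(n, dists)]

    def _distance(self, a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
```

A real wrapper would delegate fit and query to the underlying library instead of scanning the data linearly.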

If your algorithm does not provide a Python wrapper, you can include it using the SubProcess system. Detailed documentation on how to do this is forthcoming [TBD]; in the meantime, check out the wrappers written for Annoy-Hamming, Dolphinn, and MIH in the install directory.
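For a rough idea of how a subprocess wrapper communicates, here is a generic line-based sketch. It does not implement the repository's actual text-based protocol (see the protocol directory for that); the child program and message format below are invented for illustration.

```python
import subprocess
import sys

# A stand-in child process: it reads one whitespace-separated vector per
# line and answers with the index of its largest coordinate. A real
# wrapper would spawn the algorithm's own executable instead.
CHILD = """
import sys
for line in sys.stdin:
    values = [float(t) for t in line.split()]
    print(values.index(max(values)))
    sys.stdout.flush()
"""

def query(proc, vector):
    # One request line in, one response line out.
    proc.stdin.write(" ".join(str(x) for x in vector) + "\\n")
    proc.stdin.flush()
    return int(proc.stdout.readline())

proc = subprocess.Popen([sys.executable, "-c", CHILD],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                        universal_newlines=True)
```

The point of the pattern is that the benchmark harness and the algorithm only share a stream of text lines, so the algorithm can be written in any language.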


Principles

  • Everyone is welcome to submit pull requests with tweaks and changes to how each library is being used.
  • In particular: if you are the author of any of these libraries, and you think the benchmark can be improved, consider making the improvement and submitting a pull request.
  • This is meant to be an ongoing project that represents the current state.
  • Make everything easy to replicate, including installing and preparing the datasets.
  • Try many different values of parameters for each library and ignore the points that are not on the precision-performance frontier.
  • High-dimensional datasets with approximately 100-1000 dimensions. This is challenging but also realistic. Not more than 1000 dimensions because those problems should probably be solved by doing dimensionality reduction separately.
  • No batching of queries, use single queries by default. ANN-Benchmarks saturates CPU cores by using a thread pool.
  • Avoid extremely costly index building (more than several hours).
  • Focus on datasets that fit in RAM. Out of core ANN could be the topic of a later comparison.
  • We currently support CPU-based ANN algorithms. GPU support is planned as future work.
  • Maintain a proper train/test split between index data and query points.
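The no-batching principle above can be sketched as follows: each query is issued on its own, and CPU cores are saturated by a thread pool rather than by batching queries together. This is a simplified stand-in for the benchmark harness, with helper names of our own invention.

```python
from concurrent.futures import ThreadPoolExecutor

def nearest(data, q):
    """Index of the point closest to q under squared Euclidean distance."""
    return min(range(len(data)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(data[i], q)))

def run_queries(data, queries, workers=4):
    # One task per query: no batching; parallelism comes from the pool.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda q: nearest(data, q), queries))
```

Keeping queries single-shot measures the latency behaviour users actually see, while the thread pool still keeps all cores busy.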



Note that NMSLIB saves indices in the indices directory. If the tests are re-run with a different seed and/or a different number of queries, the contents of this directory should be deleted.


Testing

The project is fully tested using Travis, with unit tests run for all the different libraries and algorithms.