Benchmarks of approximate nearest neighbor libraries in Python


Benchmarking nearest neighbors

This project contains some tools to benchmark various implementations of approximate nearest neighbor (ANN) search.
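At its core, each benchmark measures two things per query: how long the library takes to answer, and how many of the true nearest neighbors it finds. The following is a minimal sketch of that idea, not the project's actual code; it assumes unit-normalized vectors, and `ann_query` is a hypothetical callable standing in for whichever library is under test.

```python
import time

import numpy as np


def brute_force_neighbors(index_vecs, query, k):
    # Exact top-k by cosine similarity, used as ground truth.
    # Assumes unit-normalized vectors, so cosine reduces to a dot product.
    sims = index_vecs @ query
    return np.argsort(-sims)[:k]


def evaluate(ann_query, index_vecs, queries, k=10):
    # ann_query(query, k) -> candidate indices; stands in for a library's search call.
    # Returns mean recall against brute force and mean query time in seconds.
    recalls, times = [], []
    for q in queries:
        truth = set(brute_force_neighbors(index_vecs, q, k))
        start = time.time()
        found = ann_query(q, k)
        times.append(time.time() - start)
        recalls.append(len(truth & set(found)) / k)
    return float(np.mean(recalls)), float(np.mean(times))
```

Averaging recall and query time over many queries gives one point on the precision-performance plot for one parameter setting of one library.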


Motivation

Doing fast searches for nearest neighbors in high-dimensional spaces is an increasingly important problem, but so far there has been little attempt at comparing methods objectively.


Install

Clone the repo and run bash install.sh. This will install all libraries as well as download and preprocess all data sets. It could take a while. It has been tested on Ubuntu 14.04.

There is also a Docker image available under erikbern/ann containing all libraries and data sets.


Principles

  • Everyone is welcome to submit pull requests with tweaks and changes to how each library is used.
  • In particular: if you are the author of any of these libraries, and you think the benchmark can be improved, consider making the improvement and submitting a pull request.
  • This is meant to be an ongoing project that represents the current state of the libraries.
  • Make everything easy to replicate, including installing and preparing the datasets.
  • To keep things simple, look only at the precision-performance tradeoff.
  • Try many different parameter settings for each library and ignore the points that are not on the precision-performance frontier (a sketch of this filtering appears under Results below).
  • Use high-dimensional datasets with approximately 100-1000 dimensions. This is challenging but also realistic. Not more than 1000 dimensions, because those problems should probably be solved by doing dimensionality reduction separately.
  • Use cosine similarity. For libraries that do not support it, this is trivially achieved by normalizing all vectors (see the sketch after this list).
  • Use single-core benchmarks. I believe most real-world scenarios can be parallelized in other ways (e.g. running multiple queries in parallel).
  • Avoid extremely costly index building (more than several hours).
  • Focus on datasets that fit in RAM. Out of core ANN could be the topic of a later comparison.
  • Do a proper train/test split between index data and query points.
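On the cosine-similarity point above: for unit vectors, ||x - q||² = 2 - 2(x·q), so Euclidean distance is a monotone function of cosine similarity, and any Euclidean-only library can serve cosine queries after normalization. A small self-contained check of this (illustrative, not part of the project):

```python
import numpy as np


def normalize(vecs):
    # Scale each row to unit L2 norm so that Euclidean distance
    # becomes a monotone function of cosine similarity.
    norms = np.linalg.norm(vecs, axis=1, keepdims=True)
    return vecs / norms


X = normalize(np.random.randn(1000, 100))
q = normalize(np.random.randn(1, 100))[0]

# For unit vectors, ||x - q||^2 = 2 - 2 * (x . q), so sorting by
# Euclidean distance gives the same order as sorting by cosine similarity.
by_euclidean = np.argsort(np.linalg.norm(X - q, axis=1))
by_cosine = np.argsort(-(X @ q))
assert (by_euclidean == by_cosine).all()
```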


Results

Currently the benchmark runs on the full 1.15M vectors from GloVe (100 dimensions, trained on tweets), on a c4.4xlarge instance on EC2. This is very much a work in progress... more results coming later!
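As described under the principles, each library is run with many parameter settings and only the settings on the precision-performance frontier are reported. Here is a minimal sketch of that filtering step, assuming each run has been summarized as a (recall, queries-per-second) pair; the function name is hypothetical, not the project's API.

```python
def pareto_frontier(runs):
    # runs: list of (recall, qps) tuples for one library.
    # Keep a run only if no other run is at least as accurate and faster.
    frontier = []
    for recall, qps in sorted(runs, key=lambda r: (-r[0], -r[1])):
        # Runs arrive in order of descending recall; one survives only if
        # it is faster than every higher-recall run seen so far.
        if not frontier or qps > frontier[-1][1]:
            frontier.append((recall, qps))
    return frontier


print(pareto_frontier([(0.9, 100), (0.8, 90), (0.8, 200), (0.5, 500)]))
# -> [(0.9, 100), (0.8, 200), (0.5, 500)]
```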