Indexing 1M vectors


This page gives some advice and results on datasets of around 1M vectors.

Guidelines

When the dataset is around 1M vectors, exhaustive search (IndexFlat) becomes too slow, so a good alternative is IndexIVFFlat. It still returns exact distances, but occasionally misses a neighbor because the search is non-exhaustive.
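
A minimal sketch of this setup in Python (the data, nlist and nprobe values here are illustrative placeholders, not the settings of the benchmarks below):

```python
import numpy as np
import faiss

d = 128                                             # vector dimension (illustrative)
xb = np.random.rand(1000000, d).astype('float32')   # placeholder database vectors
xq = np.random.rand(100, d).astype('float32')       # placeholder query vectors

nlist = 4096                              # number of inverted lists (clusters)
quantizer = faiss.IndexFlatL2(d)          # coarse quantizer that assigns vectors to lists
index = faiss.IndexIVFFlat(quantizer, d, nlist, faiss.METRIC_L2)
index.train(xb)                           # cluster the data to build the inverted lists
index.add(xb)
index.nprobe = 16                         # nb of lists visited at search time (speed/accuracy knob)
D, I = index.search(xq, 100)              # distances are exact within the visited lists
```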

Below are a few experiments on datasets of size 1M with different indexing methods. We focus on the tradeoff between:

  • speed, measured on an "Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz" with 20 threads enabled. It is reported in ms in batch mode (i.e. all query vectors are handled simultaneously).
  • accuracy. We use the 1-NN recall @ R measure with R = 1, 10 or 100: the fraction of query vectors for which the actual nearest neighbor is ranked in the first R results (see the snippet after this list).
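
For reference, a minimal sketch of how this measure can be computed, assuming I is the (nq, k) matrix of result ids returned by the index and gt is the (nq,) array of ground-truth nearest-neighbor ids (both names are assumptions for illustration):

```python
import numpy as np

def recall_at_R(I, gt, R):
    # fraction of queries whose true 1-NN appears among the first R results
    return float((I[:, :R] == gt[:, None]).any(axis=1).mean())
```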

WARNING: this is an optimistic setting: running with a single thread is about 10x slower, and providing queries one by one costs another 2-3x slowdown (see the timings for SIFT 1M below).

The index types that are tested are referred to by their index key, as created by the index_factory.

For all indexes, we request 100 results per query.
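
For instance, index keys of the kind used in the plots can be instantiated directly from their factory string (a sketch; xt, xb and xq are assumed to hold training, database and query vectors):

```python
import faiss

d = 256
for key in ["Flat", "IVF4096,Flat", "IMI2x10,Flat", "OPQ32_128,IVF4096,PQ32"]:
    index = faiss.index_factory(d, key)  # the index key is the factory string
    index.train(xt)                      # no-op for Flat, clustering/OPQ training otherwise
    index.add(xb)
    D, I = index.search(xq, 100)         # 100 results per query, as stated above
```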

CNN activation maps

We use 4096D descriptors extracted from 1M images, and the same kind of descriptors for the query images. The 4096D descriptors are reduced to 256D by PCA (applied beforehand; the cost of the PCA transformation is not reflected in the results). The most appropriate indexes for this setting are compared below.
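
One way to express this pipeline is to fold the PCA into the index as a preprocessing transform via the factory string (a sketch; xt and xb are assumed to hold the training and database descriptors; the benchmarks apply the PCA beforehand instead):

```python
import faiss

# 4096D input, reduced to 256D by PCA before the IVF stage
index = faiss.index_factory(4096, "PCA256,IVF4096,Flat")
index.train(xt)   # trains both the PCA matrix and the coarse quantizer
index.add(xb)     # database vectors are PCA-transformed on the fly
```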

So an IVF index is better for high-accuracy regimes, and the IMI (inverted multi-index) for lower-accuracy regimes.

NOTE: IVF16384 gives slightly better results than IVF4096, but the latter is significantly cheaper to train and to add vectors to (this cost is not reflected in the measurements).

If memory is a concern, compressed vectors can be used instead (with IndexIVFPQ).

IVF4096,Flat is plotted only for reference.

For example: OPQ32_128,IVF4096,PQ32 uses size of the code + id = 32 + 8 = 40 bytes per vector. This makes a total of 40 MB for the database, but there are overheads due to precomputed tables, geometric array allocation, etc.
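
As a sketch, the corresponding factory call, with the arithmetic above restated in the comments:

```python
import faiss

index = faiss.index_factory(256, "OPQ32_128,IVF4096,PQ32")
# OPQ32_128: rotate and reduce to 128D so that PQ32 encodes 32 sub-vectors of 4D each
# per vector: 32-byte PQ code + 8-byte id = 40 bytes
# 1M vectors -> about 40 MB, plus overheads (precomputed tables, array allocation, ...)
```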

We also use ResNet-34 descriptors trained on ImageNet and computed on the first 1M images of Flickr100M, from which we remove the two upper layers to get 512D activation maps. The behavior is similar:

SIFT 1M

This is a common benchmark used in research papers. It consists of SIFT descriptors (128D) extracted from image patches.

The conclusions are about the same as for the CNN data.

Here we also evaluate the performance in relation to threading and batching:

  • nt1: batched queries, single thread

  • nobatch / nobatch_nt1: queries are submitted one by one (with 20 threads / with a single thread)

  • parallelqueries: queries are submitted one by one from parallel threads, each processed by a non-batched call

It appears that a single-thread run is 5x to 12x slower. When queries are submitted one at a time, threading is not useful.
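
These settings map onto two knobs, sketched below (index and xq are assumed to be the trained index and the query matrix from the snippets above):

```python
import faiss

faiss.omp_set_num_threads(1)   # the "nt1" setting: search with a single thread

# batched: one call handles all queries (the fast path)
D, I = index.search(xq, 100)

# "nobatch": queries are submitted one by one
for q in xq:
    Di, Ii = index.search(q.reshape(1, -1), 100)
```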

NOTE: in this plot we report the speed in QPS (queries per second) and the x-axis is the 10-precision at 10 measure (i.e. the intersection measure). This allows an easy comparison with nmslib.

A comparison with the benchmarks above is not accurate because the machines are not the same. A direct comparison with nmslib shows that nmslib is faster, but uses significantly more memory. For Faiss, the build time is sub-linear and memory usage is linear.

index                     search time   1-R@1    index size     index build time
nmslib (hnsw)             0.081 s       0.8195   512 + 796 MB   173 s
IVF16384,Flat             0.538 s       0.8980   512 + 8 MB     240 s
IVF16384,Flat (Titan X)   0.059 s       0.8145   512 + 8 MB     < 10 s
Flat (Titan X)            0.753 s       0.9935   512 MB         0 s

The last two results are obtained with a GPU (Titan X).
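
A sketch of how an index is moved to a GPU with the faiss-gpu build (128 is the SIFT dimension; the rest of the setup is as above):

```python
import faiss

res = faiss.StandardGpuResources()                     # GPU memory and stream resources
cpu_index = faiss.index_factory(128, "IVF16384,Flat")
gpu_index = faiss.index_cpu_to_gpu(res, 0, cpu_index)  # copy the index to GPU device 0
# train / add / search calls are then the same as for the CPU index
```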

Twitter GloVe

This dataset is used as a benchmark by Annoy. The performance measure is different: the intersection of the found 10-NN with the ground-truth 10-NN.
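
A sketch of this measure, assuming I holds the found ids and gt the ground-truth 10-NN ids, both of shape (nq, 10) (names are illustrative):

```python
import numpy as np

def intersection_measure(I, gt, k=10):
    # average fraction of the ground-truth k-NN recovered among the found k-NN
    nq = I.shape[0]
    return sum(len(np.intersect1d(I[i, :k], gt[i, :k])) for i in range(nq)) / float(nq * k)
```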

An IVFPQ index followed by a verification step (re-ranking the top candidates with exact distances) improves the results slightly.
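
One way to implement such a verification is IndexRefineFlat, which keeps the full vectors around to re-rank the candidates of the compressed index (a sketch; the 100D dimension and PQ20 code size are illustrative, and xt, xb, xq are assumed as before):

```python
import faiss

base = faiss.index_factory(100, "IVF4096,PQ20")  # compressed index over the vectors
index = faiss.IndexRefineFlat(base)              # stores full vectors for verification
index.train(xt)
index.add(xb)
index.k_factor = 10          # fetch 10*k candidates from the base index, re-rank exactly
D, I = index.search(xq, 10)
```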

Random data

We use data sampled uniformly on a unit sphere in 128D. This is the hardest data to index because there is no regularity whatsoever that the indexing method can exploit.

The measured performance is recall @ 100; otherwise the numbers would be too low. Here, for most operating points, doing a brute-force distance computation is the best we can do.
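
For reference, a sketch of how such data can be generated, together with the brute-force baseline (sizes are illustrative):

```python
import numpy as np
import faiss

d = 128
xb = np.random.randn(1000000, d).astype('float32')
xb /= np.linalg.norm(xb, axis=1, keepdims=True)  # normalized Gaussians are uniform on the sphere

index = faiss.IndexFlatL2(d)  # exhaustive search: the baseline that is hard to beat here
index.add(xb)
```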
