I am trying to use a multi-GPU index (with the standard IndexReplicas multi-GPU mode).
I have tensors that are already on the GPU, and I am trying to use them to train/add to/search the index.
I am doing import faiss.contrib.torch_utils to get GPU support, and this works for a single-GPU setup. Unfortunately, it does not seem to work for a multi-GPU setup. I have included more details below.
Traceback (most recent call last):
  File "helpers/create-faiss-database.py", line 43, in <module>
    index.train(features)
  File "/home/luke/anaconda3/envs/faiss/lib/python3.8/site-packages/faiss/contrib/torch_utils.py", line 195, in torch_replacement_train
    assert hasattr(self, 'getDevice'), 'GPU tensor on CPU index not allowed'
AssertionError: GPU tensor on CPU index not allowed
Honestly, this is not a big deal for training or adding vectors to the index. However, it is a big deal for searching the index, because my data is already on the GPU. I could transfer it to the CPU, but that would be slow. If I train/add to the index using CPU numpy arrays and then search with a torch tensor, I get the same error.
As a note, I can see with nvidia-smi that the index is correctly using the GPUs, even when I add the vectors as CPU numpy arrays.
I feel like there's something obvious here that I'm missing. Could someone provide a bit of help?
I used Anaconda and installed with the commands below, and it worked. If you use Docker, install Miniconda first.
conda install faiss-gpu cudatoolkit=11.1 -c pytorch-gpu
conda install -c anaconda pytorch-gpu
IndexReplicas is a CPU index; there is no native GPU-only index support for it.
Note that for most use cases, the CPU->GPU transfer, or even a full round trip, is not the bottleneck; the search time is.
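In that spirit, one workaround is to round-trip the queries through host memory before calling the replicated index. A minimal sketch (the shape is illustrative, and multi_index stands for a hypothetical replicated index, not code from this thread):

```python
import numpy as np
import torch

# Query batch produced on the GPU (illustrative shape);
# falls back to CPU when no CUDA device is present.
device = 'cuda' if torch.cuda.is_available() else 'cpu'
xq_gpu = torch.rand(16, 64, device=device)

# Copy to host memory as contiguous float32, which the CPU-side
# IndexReplicas wrapper accepts; for typical batch sizes this copy
# is small compared to the search itself.
xq_cpu = np.ascontiguousarray(xq_gpu.cpu().numpy(), dtype='float32')

# D, I = multi_index.search(xq_cpu, 5)  # multi_index: hypothetical replicated index
```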
Summary

Multi-GPU (IndexReplicas) indexes reject torch tensors that are already on the GPU for train/add/search, even with import faiss.contrib.torch_utils; the same setup works on a single GPU.

Platform
OS: Ubuntu 20.04.1
Faiss version: 1.7.1
Installed from: anaconda
Running on: GPU
Interface: Python
Reproduction instructions
Here is the setup:
Now, the following code works (single GPU):
I would expect the following code to work (multi-GPU):
However, I get the AssertionError shown in the traceback at the top of this issue ('GPU tensor on CPU index not allowed').