Can PyTorch Tensors be used as the index? #561

Closed
moskomule opened this issue Aug 4, 2018 · 9 comments
moskomule commented Aug 4, 2018

Hi, I'm trying to use FAISS with PyTorch (both are the latest versions available via conda install -c pytorch ...). In https://github.com/facebookresearch/faiss/blob/master/gpu/test/test_pytorch_faiss.py, the index is constructed as follows.

xq = faiss.randn(nq * d, 1234).reshape(nq, d)   # random query vectors (numpy float32)
xb = faiss.randn(nb * d, 1235).reshape(nb, d)   # random database vectors (numpy float32)
res = faiss.StandardGpuResources()              # GPU resources for the index
index = faiss.GpuIndexFlatIP(res, d)            # flat inner-product index on the GPU
index.add(xb)                                   # add the database vectors

Is it possible to use PyTorch's Tensor as xb above? Thank you.


At least index.add(torch_tensor) doesn't work.

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-10-12010aa60bdb> in <module>()
      1 index = faiss.IndexFlatL2(d)
----> 2 index.add(xb_torch)

~/.miniconda/lib/python3.6/site-packages/faiss/__init__.py in replacement_add(self, x)
     95 
     96     def replacement_add(self, x):
---> 97         assert x.flags.contiguous
     98         n, d = x.shape
     99         assert d == self.d

AttributeError: 'Tensor' object has no attribute 'flags'
beauby added the question label Aug 4, 2018

beauby commented Aug 4, 2018

No, you'll need to convert your tensor to a numpy array.

beauby closed this as completed Aug 4, 2018
moskomule commented

Thank you. 😢


beauby commented Aug 5, 2018

@moskomule Note that, as mentioned here, it is just a matter of doing index.add(xb.numpy()) instead of index.add(xb).
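
For reference, a minimal sketch of that round trip, assuming xb and xq are contiguous float32 CPU tensors (the sizes here are made up):

import faiss
import torch

d = 64
xb = torch.rand(10000, d)           # database vectors, float32 CPU tensor
xq = torch.rand(100, d)             # query vectors

index = faiss.IndexFlatL2(d)
index.add(xb.numpy())               # .numpy() is a zero-copy view of a contiguous CPU tensor
D, I = index.search(xq.numpy(), 5)

Note that .numpy() only works for CPU tensors; a GPU tensor would first need a .cpu() transfer.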

moskomule commented

I see. But in my case, xb is a GPU tensor, so I'm worried about the bottleneck of the GPU->CPU->GPU round trip and the extra memory consumption.


mdouze commented Aug 16, 2018

See here on how to pass pytorch GPU tensors without copying them:
https://github.com/facebookresearch/faiss/blob/master/gpu/test/test_pytorch_faiss.py
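
Roughly, that test wraps the tensor's data pointer in a SWIG pointer and calls the raw *_c methods, along these lines (a sketch modeled on the old test helpers, not the exact file):

import faiss
import torch

def swig_ptr_from_float_tensor(x):
    # wrap a contiguous float32 tensor (CPU or GPU) as a SWIG float*
    assert x.is_contiguous()
    assert x.dtype == torch.float32
    return faiss.cast_integer_to_float_ptr(
        x.storage().data_ptr() + x.storage_offset() * 4)

def index_add_pytorch(index, xb):
    # hypothetical helper: hand the tensor's memory to the index without a host copy
    n, d = xb.size()
    assert d == index.d
    index.add_c(n, swig_ptr_from_float_tensor(xb))

Here add_c is the unwrapped SWIG method that faiss keeps alongside its numpy-friendly add wrapper.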

moskomule commented

@mdouze Thank you. Do you mean tensor.storage().data_ptr()? It's still unclear to me how to use it with the index.


gemfield commented Sep 8, 2020

The link above is now broken; use https://github.com/facebookresearch/faiss/blob/master/faiss/gpu/test/test_pytorch_faiss.py instead.
You can also have a look at https://github.com/DeepVAC/deepvac/blob/master/deepvac/syszux_feature_vector.py; the class NamesPathsClsFeatureVectorByFaissPytorch may help.


skei0 commented Dec 22, 2020

wickedfoo commented

@skei0 yes, import faiss.contrib.torch_utils (https://github.com/facebookresearch/faiss/blob/master/contrib/torch_utils.py) and you can simply pass a pytorch tensor as xb.
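
A minimal usage sketch, assuming a faiss build recent enough to ship contrib/torch_utils.py:

import faiss
import faiss.contrib.torch_utils  # patches faiss index methods to accept torch tensors
import torch

d = 64
xb = torch.rand(10000, d)
xq = torch.rand(100, d)

index = faiss.IndexFlatL2(d)
index.add(xb)                     # torch tensor passed directly
D, I = index.search(xq, 5)        # results come back as torch tensors

With a GPU index and GPU tensors, the wrapper keeps the data on the device, which addresses the copy concern raised earlier in the thread.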
