FAISS indexing parameters #44

Closed
littlewine opened this issue May 25, 2021 · 6 comments

littlewine commented May 25, 2021

Dear authors,
Thank you for your nice work and for providing the code repository.
I would like to use your model to index a collection with FAISS. However, there are a few parameters in the FAISS indexing example command that I do not understand:

python -m colbert.index_faiss \
--index_root /root/to/indexes/ --index_name MSMARCO.L2.32x200k \
--partitions 32768 --sample 0.3 \
--root /root/to/experiments/ --experiment MSMARCO-psg

One is sample and the other is partitions. My guess is that the second one splits the generated index file into different partitions, and hence is not so important (correct me if I'm wrong), but what about sample?

Also, is the --root /root/to/experiments/ expected to be the colbert code directory (this repo)?

Thank you

okhat (Collaborator) commented May 26, 2021

Thank you for the kind words!

Partitions is very important: it is the number of centroids used by FAISS for indexing and search. Higher means slower FAISS indexing but faster retrieval. You can make the number 2x or 4x smaller and it would still be fine.

The sample dictates how much of the data is used for FAISS indexing, so here it's 30%. If you drop this parameter completely, the default will internally be 5%. More is better, but 5--30% is enough.
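A minimal illustrative sketch of how these two knobs typically map onto a FAISS IVF index; this is not the actual colbert.index_faiss code, and the dimensions, sizes, and choice of IndexIVFFlat here are assumptions made for the example:

import faiss
import numpy as np

dim = 128                # ColBERT embedding dimension
partitions = 1024        # --partitions: number of IVF centroids (32768 in the command above)
sample_fraction = 0.3    # --sample: fraction of embeddings used to train the centroids

# Hypothetical stand-in for the stored document embeddings.
embeddings = np.random.rand(200_000, dim).astype("float32")

quantizer = faiss.IndexFlatL2(dim)
index = faiss.IndexIVFFlat(quantizer, dim, partitions)

# Train the coarse quantizer (k-means over `partitions` centroids) on a sample only.
sample_ids = np.random.choice(len(embeddings), int(sample_fraction * len(embeddings)), replace=False)
index.train(embeddings[sample_ids])

# Add *all* embeddings, regardless of the sample used for training.
index.add(embeddings)
print(index.ntotal)      # 200000: every embedding is stored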

littlewine (Author) commented:

So, as far as I understand, sample is used during indexing, but it does not mean that a sample value of 0.3 will index only 30% of the documents. Correct?

okhat (Collaborator) commented May 27, 2021

Yes, all documents will be indexed! Irrespective of what you choose for sample, all embeddings are going to be stored.

Sampling just dictates the not-so-critical aspect of how to create internal representations without too much cost.

littlewine (Author) commented:

Another semi-related indexing parameter question:

What does the --doc_maxlen 180 parameter do? From my understanding, it truncates passages to 180 tokens, but what happens if a passage/document is longer than 180 tokens? Does it throw away the rest of the document (FirstP), or does it split the document into multiple passages and search across all of them (MaxP)?

Also, was 180 the value used in the original work? I am planning to use ColBERT to index other datasets apart from MS MARCO passage, so I am not sure whether I would in fact have to retrain everything from scratch.

After trying to index a collection using the checkpoint from the original work, transformers gave me this warning, which I guess should alarm me:

Some weights of ColBERT were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['linear.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Some weights of the model checkpoint at bert-base-uncased were not used when initializing ColBERT: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias']
- This IS expected if you are initializing ColBERT from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
- This IS NOT expected if you are initializing ColBERT from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).

If I understand correctly, trying to index using a checkpoint created with a different --doc_maxlen would likely create inconsistencies and result in a worse representation of the corpus.

Thank you again for your help! :)

littlewine reopened this Jun 2, 2021
okhat (Collaborator) commented Jun 3, 2021

By default it's FirstP. You'll have to split the documents up if you want to implement MaxP on top of this.

You don't have to retrain. Just split the documents into passages of 100--150 words (with a plain Python whitespace split) and select an appropriate --doc_maxlen in the range 180--256. It should work fine.
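A minimal sketch of that splitting step, assuming a plain whitespace split and a hypothetical helper name (this is not code from the repo):

# Hypothetical helper: split a long document into ~120-word passages so that
# FirstP truncation at --doc_maxlen loses little or no text per passage.
def split_into_passages(text, words_per_passage=120):
    words = text.split()  # plain Python whitespace split
    return [" ".join(words[i:i + words_per_passage])
            for i in range(0, len(words), words_per_passage)]

# Example: a 500-word document becomes 5 passages, each well under doc_maxlen tokens.
passages = split_into_passages("word " * 500)
print(len(passages))  # 5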

okhat closed this as completed Jun 10, 2021
Lim-Sung-Jun commented Nov 7, 2022

> The sample dictates how much of the data is used for FAISS indexing, so here it's 30%. If you drop this parameter completely, the default will internally be 5%. More is better, but 5--30% is enough.
>
> Yes, all documents will be indexed! Irrespective of what you choose for sample, all embeddings are going to be stored.
>
> Sampling just dictates the not-so-critical aspect of how to create internal representations without too much cost.

Hello,
I have two questions, about sampling and about index.add.

  1. Sampling
     What I've understood so far is that sampling is just for analysing the distribution of the documents (the collection) so that FAISS can build its index. Is that right?
     If so, do we use only 30% simply to reduce the cost, because index.train() on 100% of the embeddings would take too much time?

  2. index.add
     Why do we feed only three ".pt" files at a time to the index.add function, i.e. index.add(sub_collection)?
     Can't we just feed all the files to the function at once?

Thank you.

P.S. I would also like to know about the following:

  • the role of slice in faiss_index.py
  • the role of chunck_size in encoder.py
  • how the .pt files are made (with what batch size and subset size?)
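Regarding the index.add question above: FAISS indexes accept repeated add() calls, so embeddings can be fed a few chunks at a time purely to bound peak memory while every embedding still ends up in the index. A minimal sketch under that assumption (the in-memory chunks below are hypothetical stand-ins for the ".pt" files, not the repo's actual loading code):

import faiss
import torch

dim = 128
quantizer = faiss.IndexFlatL2(dim)
index = faiss.IndexIVFFlat(quantizer, dim, 256)   # small nlist, just for the demo

# Hypothetical stand-ins for the precomputed ".pt" embedding chunks.
chunks = [torch.rand(10_000, dim) for _ in range(6)]

# Train once on a sample, as in the earlier discussion.
index.train(torch.cat(chunks[:2]).numpy())

# Feed a few chunks per add() call: this only bounds peak memory; add() can be
# called repeatedly, and every embedding still ends up in the index.
for i in range(0, len(chunks), 3):
    sub_collection = torch.cat(chunks[i:i + 3]).numpy()
    index.add(sub_collection)

print(index.ntotal)  # 60000: all chunks were added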
