Updating an index for batch indexing #5
Unfortunately, it is not possible! The reason is that BM25 calculates each score based on the term frequency (tf), the number of documents (|D|), and the average document length (L_avg); thus, any newly added document changes the information previously calculated. As for a multi-stage approach (so you can remove the data from memory), I've thought about how to deal with memory via incremental passes and chunked indexing, but unfortunately I did not have the chance to come up with better ideas that would be straightforward to implement and use. I think it's good to keep this issue open in case someone has a suggestion & potential PR.
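To illustrate why previously computed scores go stale: with a Robertson-style IDF (a common BM25 variant used here only for illustration; bm25s's exact formula depends on the method chosen), adding even a single document changes the document count N, and therefore every stored IDF value, even for terms the new document does not contain:

```python
import math

def bm25_idf(df, n_docs):
    # Robertson/Okapi-style IDF (illustrative; the exact bm25s
    # formula varies by variant)
    return math.log((n_docs - df + 0.5) / (df + 0.5) + 1)

# "cat" appears in 2 of 3 documents
idf_before = bm25_idf(df=2, n_docs=3)

# Add one new document that does NOT contain "cat": df is unchanged,
# but N changed, so the stored IDF for "cat" is now stale.
idf_after = bm25_idf(df=2, n_docs=4)

assert idf_before != idf_after
```

This is why precomputed per-term scores cannot simply be appended to: every IDF in the index would need to be revisited.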
One major hurdle is that we need multiple passes over the corpus tokens (one to get the term frequencies and average length, a second for the actual scoring). Although it might be possible to use
That said, if your goal is to reduce memory usage, you can also consider passing an iterator to the tokenizer instead of a list in memory. Some pseudocode:

```python
import json
import bm25s

def load_corpus(fname):
    with open(fname) as f:
        for line in f:
            yield json.loads(line)['text']

corpus_tokens = bm25s.tokenize(load_corpus(fname), stemmer=stemmer)
```
@xhluca Great job! You obviously put a lot of time into this! I will be glad to spend some time and see if I can bring this to your implementation.

```python
# ........
def index_corpus(corpus_batch):
    corpus_tokens = bm25s.tokenize([d['text'] for d in corpus_batch], stopwords="en", stemmer=stemmer)
    # update given the current state -> change to the new state,
    # as if you had called .index(full_corpus)
    retriever.update(corpus_tokens)
# .......

# 1)
index_corpus(batch_0)  # <- inits and calculates idf etc.
index_corpus(batch_1)  # <- updates and calculates new idf etc.

# 2)
index_corpus(batch_0 + batch_1)
```

(1) and (2) should give the same index.
Hey, that'd be an interesting addition! I'd be happy to review a PR with the incremental feature, docstrings, and tests for the incremental update part. That said, one reason I have not touched it is how BM25 is calculated, which is not directly compatible with precomputed BM25 scores. For example, the term frequency, average length, and document count all change after an incremental update. Do you have an idea how to update existing precomputed scores in this case?
Look at my code in the link.
No worries, take your time!
But wouldn't that be quite inefficient if you process by batch? For example, say you have 100M docs and you want to add 10: it would be very fast to update a term frequency dictionary and the average doc length. However, updating the IDF calculation across all 100M docs would take much longer, since you can't increment the values; instead, you need to recalculate the idf and tf components. Repeating that for each batch would be very computationally inefficient (and would also need to keep everything in memory... which defeats the purpose of batched updates).
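The cost asymmetry above can be sketched in a few lines (a toy illustration, not bm25s code): document frequencies and lengths accept O(batch)-sized incremental updates, but because the document count N changes with every batch, the IDF must be recomputed over the entire vocabulary after each one:

```python
import math
from collections import Counter

df = Counter()   # document frequency per term (cheap to update)
n_docs = 0
recomputed_terms = 0  # count how much IDF work each batch triggers

for batch in [[["cat", "dog"]], [["cat"]], [["fish"]]]:
    # O(batch) incremental bookkeeping
    for doc in batch:
        n_docs += 1
        for term in set(doc):
            df[term] += 1
    # full-vocabulary IDF pass after every batch, since n_docs changed
    idf = {t: math.log((n_docs - d + 0.5) / (d + 0.5) + 1)
           for t, d in df.items()}
    recomputed_terms += len(idf)

# 2 + 2 + 3 = 7 per-term IDF computations for a 3-term vocabulary
assert recomputed_terms == 7
```

With 100M docs and many small batches, that full-vocabulary pass per batch is what dominates.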
Well yes, but this is the price to pay if you want to batch... I don't have a solution otherwise at the moment. Either you process by batch, save the state, and get rid of those documents, or you need to re-index the whole corpus every time you have a new batch of documents anyway.
Thanks. I've been thinking about this, but I was hoping I had missed some solution... In that case, I guess index updates (i.e. for batching) are not possible, but moving forward, I think it will be nice to eventually support low-memory tokenizing (i.e. a streaming tokenizer that
Sure, batched updates are not possible by incrementing the idf values. Still, in my use case I don't mind recalculating everything but with only the new documents, as the old ones are already archived; it's a trade-off. What happens if you need to add 100 new documents to the index? You need to have them all in one place.
Makes sense! I'll close now. Thank you! |
Got an idea about the batching:

```python
# 1)
index_corpus(batch_0)  # <- inits and calculates doc_frequencies etc., but not idfs
index_corpus(batch_1)  # <- updates
index_corpus.flush()   # <- calculates idfs etc.

# 2)
index_corpus(batch_0 + batch_1)
index_corpus.flush()
```

(1) and (2) would be the same.
I think it would make sense to make it into a util class, e.g.:

```python
idfs = {}
tfcs = {}

for batch in batches:
    idfs = bm25s.utils.corpus.update_idfs(idfs, batch)
    tfcs = bm25s.utils.corpus.update_tfcs(tfcs, batch)
```

Then, we could have a method `retriever.index_precomputed(idfs=idfs, tfcs=tfcs)`. If that method works well (passes new unittests, is as fast as the original, does not increase memory consumption), then we can rewrite
Yeah, makes sense :)
Hi! Is it possible to update an existing index, e.g. for batch indexing and larger-than-memory datasets?
Unfortunately, this does not work: