How to "Raw" Index using Anserini? #1353
Hi @Mistobaan, see https://github.com/castorini/pyserini/#how-do-i-search-my-own-documents
You'll need to disable stopword removal and stemming, and tokenize only on spaces. This is doable in Lucene, but I'm not sure the right hooks are exposed in Python. No, you cannot specify the weight of each term. Hope this helps!
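For reference, a minimal sketch of that setup on the Lucene (Java) side; the index path and field name here are just placeholders:

```java
import java.nio.file.Paths;

import org.apache.lucene.analysis.core.WhitespaceAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.FSDirectory;

public class RawIndexer {
  public static void main(String[] args) throws Exception {
    // WhitespaceAnalyzer splits on whitespace only: no stemming,
    // no stopword removal, no lowercasing.
    IndexWriterConfig config = new IndexWriterConfig(new WhitespaceAnalyzer());
    try (IndexWriter writer =
             new IndexWriter(FSDirectory.open(Paths.get("my-raw-index")), config)) {
      Document doc = new Document();
      // Tokens are indexed exactly as they appear in the string.
      doc.add(new TextField("contents", "my pre tokenized text", Field.Store.YES));
      writer.addDocument(doc);
    }
  }
}
```

You'd also need the same WhitespaceAnalyzer at query time so query terms match the indexed tokens exactly.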
Thanks @lintool. Yes, that definitely helped. With the new keyword (Lucene Analyzer) I was able to find the Lucene feature for custom term frequencies: https://issues.apache.org/jira/browse/LUCENE-7854
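For anyone following along: LUCENE-7854 ships as DelimitedTermFrequencyTokenFilter, which parses tokens of the form term|freq. A rough sketch of using it (index path, field name, and tokens are made up; as I understand it, custom term frequencies require a field that indexes docs and frequencies only, with positions and norms omitted):

```java
import java.nio.file.Paths;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.miscellaneous.DelimitedTermFrequencyTokenFilter;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.FieldType;
import org.apache.lucene.index.IndexOptions;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.FSDirectory;

public class CustomFreqIndexer {
  public static void main(String[] args) throws Exception {
    // Split on whitespace, then parse a trailing "|<freq>" on each
    // token into a custom term frequency.
    Analyzer analyzer = new Analyzer() {
      @Override
      protected TokenStreamComponents createComponents(String fieldName) {
        Tokenizer tokenizer = new WhitespaceTokenizer();
        TokenStream filter = new DelimitedTermFrequencyTokenFilter(tokenizer);
        return new TokenStreamComponents(tokenizer, filter);
      }
    };

    IndexWriterConfig config = new IndexWriterConfig(analyzer);
    try (IndexWriter writer =
             new IndexWriter(FSDirectory.open(Paths.get("my-raw-index")), config)) {
      // Custom term frequencies need docs+freqs only (no positions)
      // and no norms.
      FieldType type = new FieldType();
      type.setIndexOptions(IndexOptions.DOCS_AND_FREQS);
      type.setOmitNorms(true);
      type.setTokenized(true);
      type.freeze();

      Document doc = new Document();
      // "apple" is indexed with term frequency 3, "banana" with 1.
      doc.add(new Field("contents", "apple|3 banana|1", type));
      writer.addDocument(doc);
    }
  }
}
```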
Now the follow-up question is: if I create the custom index in Anserini/Lucene, how does it translate to Elasticsearch (to get distributed computing)?
This ^^^^ There's no way for ES to "import" a Lucene index directly; it needs to re-index from scratch. So you'll need to set the tokenizer appropriately on the ES end, by twiddling with the ES configs, like here: https://github.com/castorini/anserini/blob/2d8359c917c0a54d2b239cd02e289b3b4790a6bb/src/main/resources/elasticsearch/index-config.cord19.json
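Concretely, the relevant tweak would be pointing the text field at ES's built-in whitespace analyzer (a sketch in the same format as that linked config, against ES 7.x; the field name is just an example, and the real config covers much more):

```json
{
  "mappings": {
    "properties": {
      "contents": {
        "type": "text",
        "analyzer": "whitespace"
      }
    }
  }
}
```

The built-in whitespace analyzer splits on whitespace only, with no lowercasing, stopword removal, or stemming.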
Thanks!
To make sure I understand and master the full pipeline, I want to create the tokens from the documents myself (i.e., skipping the built-in stemming and tokenization) and index the generated tokens directly with Lucene.
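A tiny sketch of that glue step, assuming a hypothetical token list produced by your own pipeline (the |freq suffixes apply only if you use the delimited-frequency filter above):

```java
import java.util.List;

public class TokenGlue {
  public static void main(String[] args) {
    // Hypothetical output of your own tokenization pipeline,
    // with per-term frequencies already computed.
    List<String> tokens = List.of("apple|3", "banana|1");

    // Join on single spaces so a whitespace tokenizer reproduces
    // the tokens exactly; this string is what gets indexed.
    String contents = String.join(" ", tokens);
    System.out.println(contents); // apple|3 banana|1
  }
}
```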
My questions: