Anserini is a toolkit for reproducible information retrieval research. By building on Lucene, we aim to bridge the gap between academic information retrieval research and the practice of building real-world search applications. Anserini grew out of a reproducibility study of various open-source retrieval engines in 2016 (Lin et al., ECIR 2016); see Yang et al. (SIGIR 2017) and Yang et al. (JDIQ 2018) for overviews.
Many Anserini features are exposed in the Pyserini Python interface; if you're looking for basic indexing and search capabilities, you might want to start there. A low-effort way to try out Anserini is through our online notebooks, which let you get started with just a few clicks. For convenience, we've also pre-built a few common indexes that are available for download.
You'll need Java 11 and Maven 3.3+ to build Anserini.
Clone our repo with the `--recurse-submodules` option to make sure the `tools/` submodule also gets cloned (alternatively, use `git submodule update --init`).
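For example, a minimal sketch (assuming you're cloning the main castorini/anserini repository on GitHub):

```bash
# Clone Anserini along with the tools/ submodule (evaluation tools and scripts)
git clone --recurse-submodules https://github.com/castorini/anserini.git
cd anserini

# If the submodule wasn't pulled in during the clone, initialize it after the fact
git submodule update --init
```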
Then, build using Maven:
```bash
mvn clean package appassembler:assemble
```
The `tools/` directory, which contains evaluation tools and other scripts, is actually a separate repository, integrated as a Git submodule (so that it can be shared across related projects).
Build as follows (you might get warnings, but they're okay to ignore):
```bash
cd tools/eval && tar xvfz trec_eval.9.0.4.tar.gz && cd trec_eval.9.0.4 && make && cd ../../..
cd tools/eval/ndeval && make && cd ../../..
```
With that, you should be ready to go!
Anserini is designed to support experiments on various standard IR test collections out of the box.
The following experiments are backed by rigorous end-to-end regression tests with `run_regression.py` and the Anserini reproducibility promise.
For the most part, these runs are based on default parameter settings.
These pages can also serve as guides to reproduce our results. See individual pages for details!
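To give a sense of what running one of these regressions looks like end to end, here is a sketch; the script location, flags, and regression name shown are illustrative, so consult the individual pages below for the authoritative invocations:

```bash
# Sketch only: index the collection, perform the retrieval runs, and verify
# effectiveness against the published figures for a single regression.
# The flags and regression name here are illustrative, not authoritative.
python src/main/python/run_regression.py --index --search --verify --regression msmarco-passage
```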
- Regressions for Disks 1 & 2 (TREC 1-3), Disks 4 & 5 (TREC 7-8, Robust04), AQUAINT (Robust05)
- Regressions for the New York Times Corpus (Core17), the Washington Post Corpus (Core18)
- Regressions for Wt10g, Gov2
- Regressions for ClueWeb09 (Category B), ClueWeb12-B13, ClueWeb12
- Regressions for Tweets2011 (MB11 & MB12), Tweets2013 (MB13 & MB14)
- Regressions for Complex Answer Retrieval (CAR17): v1.5, v2.0, v2.0 with doc2query
- Regressions for MS MARCO (V1) Passage Ranking:
- Unsupervised lexical: baselines, doc2query, doc2query-T5
- Learned sparse lexical (uniCOIL family): uniCOIL noexp, uniCOIL with d2q-T5, uniCOIL with TILDE
- Learned sparse lexical (other): DeepImpact, SPLADEv2, SPLADE-distill CoCodenser-medium
- Regressions for MS MARCO (V1) Document Ranking:
- Unsupervised lexical, complete doc*: baselines, doc2query-T5
- Unsupervised lexical, segmented doc*: baselines, doc2query-T5
- Learned sparse lexical: uniCOIL noexp, uniCOIL with d2q-T5
- Regressions for TREC 2019 Deep Learning Track, Passage Ranking:
- Unsupervised lexical: baselines, doc2query-T5
- Learned sparse lexical: uniCOIL noexp, uniCOIL with d2q-T5
- Regressions for TREC 2019 Deep Learning Track, Document Ranking:
- Unsupervised lexical, complete doc*: baselines, doc2query-T5
- Unsupervised lexical, segmented doc*: baselines, doc2query-T5
- Learned sparse lexical: uniCOIL noexp, uniCOIL with d2q-T5
- Regressions for TREC 2020 Deep Learning Track, Passage Ranking:
- Unsupervised lexical: baselines, doc2query-T5
- Learned sparse lexical: uniCOIL noexp, uniCOIL with d2q-T5
- Regressions for TREC 2020 Deep Learning Track, Document Ranking:
- Unsupervised lexical, complete doc*: baselines, doc2query-T5
- Unsupervised lexical, segmented doc*: baselines, doc2query-T5
- Learned sparse lexical: uniCOIL noexp, uniCOIL with d2q-T5
- Regressions for MS MARCO (V2) Passage Ranking:
- Unsupervised lexical, original corpus: baselines, doc2query-T5
- Unsupervised lexical, augmented corpus: baselines, doc2query-T5
- Learned sparse lexical: uniCOIL noexp zero-shot, uniCOIL with d2q-T5 zero-shot
- Regressions for MS MARCO (V2) Document Ranking:
- Unsupervised lexical, complete doc: baselines, doc2query-T5
- Unsupervised lexical, segmented doc: baselines, doc2query-T5
- Learned sparse lexical: uniCOIL noexp zero-shot, uniCOIL with d2q-T5 zero-shot
- Regressions for TREC 2021 Deep Learning Track, Passage Ranking:
- Unsupervised lexical, original corpus: baselines, doc2query-T5
- Unsupervised lexical, augmented corpus: baselines, doc2query-T5
- Learned sparse lexical: uniCOIL noexp zero-shot, uniCOIL with d2q-T5 zero-shot
- Regressions for TREC 2021 Deep Learning Track, Document Ranking:
- Unsupervised lexical, complete doc: baselines, doc2query-T5
- Unsupervised lexical, segmented doc: baselines, doc2query-T5
- Learned sparse lexical: uniCOIL noexp zero-shot, uniCOIL with d2q-T5 zero-shot
- Regressions for TREC News Tracks (Background Linking Task): 2018, 2019, 2020
- Regressions for FEVER Fact Verification
- Regressions for NTCIR-8 ACLIA (IR4QA subtask, Monolingual Chinese)
- Regressions for CLEF 2006 Monolingual French
- Regressions for TREC 2002 Monolingual Arabic
- Regressions for FIRE 2012: Monolingual Bengali, Monolingual Hindi, Monolingual English
- Regressions for Mr. TyDi (v1.1) baselines: ar, bn, en, fi, id, ja, ko, ru, sw, te, th
- Regressions for BEIR (v1.0.0):
- ArguAna: baseline, SPLADE-distill CoCodenser-medium
- Climate-FEVER: SPLADE-distill CoCodenser-medium
- DBPedia: SPLADE-distill CoCodenser-medium
- FEVER: SPLADE-distill CoCodenser-medium
- FiQA-2018: SPLADE-distill CoCodenser-medium
- HotpotQA: SPLADE-distill CoCodenser-medium
- NFCorpus: SPLADE-distill CoCodenser-medium
- NQ: SPLADE-distill CoCodenser-medium
- Quora: SPLADE-distill CoCodenser-medium
- SCIDOCS: SPLADE-distill CoCodenser-medium
- SciFact: SPLADE-distill CoCodenser-medium
- TREC-COVID: SPLADE-distill CoCodenser-medium
- Touche2020: SPLADE-distill CoCodenser-medium
The experiments described below are not associated with rigorous end-to-end regression testing and thus provide a lower standard of reproducibility. For the most part, manual copying and pasting of commands into a shell is required to reproduce our results.
- Reproducing BM25 baselines for MS MARCO Passage Ranking
- Reproducing BM25 baselines for MS MARCO Document Ranking
- Reproducing baselines for the MS MARCO Document Ranking Leaderboard
- Reproducing doc2query results (MS MARCO Passage Ranking and TREC-CAR)
- Reproducing docTTTTTquery results (MS MARCO Passage and Document Ranking)
- Notes about reproduction issues with MS MARCO Document Ranking w/ docTTTTTquery
- Reproducing BM25 baselines on the MS MARCO V2 Collections
- Indexing AI2's COVID-19 Open Research Dataset
- Baselines for the TREC-COVID Challenge
- Baselines for the TREC-COVID Challenge using doc2query
- Ingesting AI2's COVID-19 Open Research Dataset into Solr and Elasticsearch
- Working with the 20 Newsgroups Dataset
- Guide to BM25 baselines for the FEVER Fact Verification Task
- Guide to reproducing "Neural Hype" Experiments
- Guide to running experiments on the AI2 Open Research Corpus
- Experiments from Yang et al. (JDIQ 2018)
- Runbooks for TREC 2018: [Anserini group] [h2oloo group]
- Runbook for ECIR 2019 paper on axiomatic semantic term matching
- Runbook for ECIR 2019 paper on cross-collection relevance feedback
- Use Anserini in Python via Pyserini
- Anserini integrates with SolrCloud via Solrini
- Anserini integrates with Elasticsearch via Elasterini
- Anserini supports approximate nearest-neighbor search on arbitrary dense vectors with Lucene
If you've found Anserini to be helpful, we have a simple request for you to contribute back.
In the course of reproducing baseline results on standard test collections, please let us know if you're successful by sending us a pull request with a simple note, like what appears at the bottom of the page for Disks 4 & 5.
Reproducibility is important to us, and we'd like to know about successes as well as failures.
Since the regression documentation is auto-generated, pull requests should be sent against the raw templates. The regression documentation can then be regenerated using the `bin/build.sh` script.
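For instance (a sketch; it assumes the script is run from the repo root and takes no required arguments):

```bash
# Rebuild the auto-generated regression documentation from the raw templates
# (assumed: run from the Anserini repo root with no arguments)
./bin/build.sh
```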
In turn, you'll be recognized as a contributor.
Beyond that, there are always open issues we would appreciate help on!
- v0.14.2: March 24, 2022 [Release Notes]
- v0.14.1: February 27, 2022 [Release Notes]
- v0.14.0: January 10, 2022 [Release Notes]
- v0.13.5: November 2, 2021 [Release Notes]
- v0.13.4: October 22, 2021 [Release Notes]
- v0.13.3: August 22, 2021 [Release Notes]
- v0.13.2: July 20, 2021 [Release Notes]
- v0.13.1: June 29, 2021 [Release Notes]
- v0.13.0: June 22, 2021 [Release Notes]
- v0.12.0: April 29, 2021 [Release Notes]
- v0.11.0: February 13, 2021 [Release Notes]
- v0.10.1: January 8, 2021 [Release Notes]
- v0.10.0: November 25, 2020 [Release Notes]
- v0.9.4: June 25, 2020 [Release Notes]
- v0.9.3: May 26, 2020 [Release Notes]
- v0.9.2: May 14, 2020 [Release Notes]
- v0.9.1: May 6, 2020 [Release Notes]
- v0.9.0: April 18, 2020 [Release Notes]
- v0.8.1: March 22, 2020 [Release Notes]
- v0.8.0: March 11, 2020 [Release Notes]
- v0.7.2: January 25, 2020 [Release Notes]
- v0.7.1: January 9, 2020 [Release Notes]
- v0.7.0: December 13, 2019 [Release Notes]
- v0.6.0: September 6, 2019 [Release Notes][Known Issues]
- v0.5.1: June 11, 2019 [Release Notes]
- v0.5.0: June 5, 2019 [Release Notes]
- v0.4.0: March 4, 2019 [Release Notes]
- v0.3.0: December 16, 2018 [Release Notes]
- v0.2.0: September 10, 2018 [Release Notes]
- v0.1.0: July 4, 2018 [Release Notes]
- Anserini was upgraded to Java 11 at commit `17b702d` (7/11/2019) from Java 8. Maven 3.3+ is also required.
- Anserini was upgraded to Lucene 8.0 as of commit `75e36f9` (6/12/2019); prior to that, the toolkit used Lucene 7.6. Based on preliminary experiments, query evaluation latency is much improved in Lucene 8. As a result of this upgrade, the results of all regressions have changed slightly. To reproduce old results from Lucene 7.6, use v0.5.1.
- Jimmy Lin, Matt Crane, Andrew Trotman, Jamie Callan, Ishan Chattopadhyaya, John Foley, Grant Ingersoll, Craig Macdonald, and Sebastiano Vigna. Toward Reproducible Baselines: The Open-Source IR Reproducibility Challenge. ECIR 2016.
- Peilin Yang, Hui Fang, and Jimmy Lin. Anserini: Enabling the Use of Lucene for Information Retrieval Research. SIGIR 2017.
- Peilin Yang, Hui Fang, and Jimmy Lin. Anserini: Reproducible Ranking Baselines Using Lucene. Journal of Data and Information Quality, 10(4), Article 16, 2018.
This research is supported in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada. Previous support came from the U.S. National Science Foundation under IIS-1423002 and CNS-1405688. Any opinions, findings, and conclusions or recommendations expressed do not necessarily reflect the views of the sponsors.