
# Anserini Regressions: TREC 2021 Deep Learning Track (Document)

**Models**: various bag-of-words approaches on complete documents

This page describes experiments, integrated into Anserini's regression testing framework, on the TREC 2021 Deep Learning Track document ranking task using the MS MARCO V2 document collection.

Note that the NIST relevance judgments provide far more relevant documents per topic than the "sparse" judgments provided by Microsoft; to emphasize this contrast, the NIST judgments are sometimes called "dense" judgments. For additional instructions on working with the MS MARCO V2 document collection, refer to this page.

Note that there are four different bag-of-words regression conditions for this task; this page describes the following:

+ **Indexing Condition:** each document in the MS MARCO V2 document collection is treated as a unit of indexing
+ **Expansion Condition:** none

The exact configurations for these regressions are stored in this YAML file. Note that this page is automatically generated from this template as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead.

From one of our Waterloo servers (e.g., orca), the following command will perform the complete regression, end to end:

```bash
python src/main/python/run_regression.py --index --verify --search --regression dl21-doc
```
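The `--index`, `--verify`, and `--search` flags select the phases of the regression to run. As an aside, assuming the script behaves as the flag names suggest, if the index has already been built you should be able to skip the (lengthy) indexing phase by dropping `--index`:

```bash
# Sketch: re-run only index verification and retrieval, assuming the
# index from a previous --index run is already in place.
python src/main/python/run_regression.py --verify --search --regression dl21-doc
```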

## Indexing

Typical indexing command:

```bash
target/appassembler/bin/IndexCollection \
  -collection MsMarcoV2DocCollection \
  -input /path/to/msmarco-v2-doc \
  -index indexes/lucene-index.msmarco-v2-doc/ \
  -generator DefaultLuceneDocumentGenerator \
  -threads 18 -storePositions -storeDocvectors -storeRaw \
  >& logs/log.msmarco-v2-doc &
```

The value of `-input` should be a directory containing the compressed JSONL files that comprise the corpus. See this page for additional details.
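Each compressed file holds one JSON document per line. As a quick sanity check before kicking off the (long) indexing job, you can pretty-print a single record; the shard file name below is hypothetical and will depend on how you obtained the corpus:

```bash
# Inspect the first document of one (hypothetical) corpus shard.
zcat /path/to/msmarco-v2-doc/msmarco_doc_00.gz | head -n 1 | python -m json.tool
```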

For additional details, see explanation of common indexing options.

## Retrieval

Topics and qrels are stored in src/main/resources/topics-and-qrels/. The regression experiments here evaluate on the 57 topics for which NIST has provided judgments as part of the TREC 2021 Deep Learning Track. The original data can be found here.
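Since the `TsvInt` topic reader is used below, the topics file is tab-separated, one topic per line, with an integer topic ID followed by the query text. A quick look:

```bash
# Each line has the form "<topic-id><TAB><query text>".
head -n 2 src/main/resources/topics-and-qrels/topics.dl21.txt
```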

After indexing has completed, you should be able to perform retrieval as follows:

```bash
target/appassembler/bin/SearchCollection \
  -index indexes/lucene-index.msmarco-v2-doc/ \
  -topics src/main/resources/topics-and-qrels/topics.dl21.txt \
  -topicreader TsvInt \
  -output runs/run.msmarco-v2-doc.bm25-default.topics.dl21.txt \
  -hits 1000 -bm25 &

target/appassembler/bin/SearchCollection \
  -index indexes/lucene-index.msmarco-v2-doc/ \
  -topics src/main/resources/topics-and-qrels/topics.dl21.txt \
  -topicreader TsvInt \
  -output runs/run.msmarco-v2-doc.bm25-default+rm3.topics.dl21.txt \
  -hits 1000 -bm25 -rm3 &

target/appassembler/bin/SearchCollection \
  -index indexes/lucene-index.msmarco-v2-doc/ \
  -topics src/main/resources/topics-and-qrels/topics.dl21.txt \
  -topicreader TsvInt \
  -output runs/run.msmarco-v2-doc.bm25-default+rocchio.topics.dl21.txt \
  -hits 1000 -bm25 -rocchio &
```
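Note that each `SearchCollection` invocation above is backgrounded with `&`, so the three runs execute concurrently; make sure they have all finished before moving on to evaluation:

```bash
# Block until all backgrounded retrieval jobs in this shell complete.
wait
```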

Evaluation can be performed using trec_eval:

```bash
tools/eval/trec_eval.9.0.4/trec_eval -c -M 100 -m map src/main/resources/topics-and-qrels/qrels.dl21-doc.txt runs/run.msmarco-v2-doc.bm25-default.topics.dl21.txt
tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.100 src/main/resources/topics-and-qrels/qrels.dl21-doc.txt runs/run.msmarco-v2-doc.bm25-default.topics.dl21.txt
tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.1000 src/main/resources/topics-and-qrels/qrels.dl21-doc.txt runs/run.msmarco-v2-doc.bm25-default.topics.dl21.txt
tools/eval/trec_eval.9.0.4/trec_eval -c -M 100 -m recip_rank -m ndcg_cut.10 src/main/resources/topics-and-qrels/qrels.dl21-doc.txt runs/run.msmarco-v2-doc.bm25-default.topics.dl21.txt

tools/eval/trec_eval.9.0.4/trec_eval -c -M 100 -m map src/main/resources/topics-and-qrels/qrels.dl21-doc.txt runs/run.msmarco-v2-doc.bm25-default+rm3.topics.dl21.txt
tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.100 src/main/resources/topics-and-qrels/qrels.dl21-doc.txt runs/run.msmarco-v2-doc.bm25-default+rm3.topics.dl21.txt
tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.1000 src/main/resources/topics-and-qrels/qrels.dl21-doc.txt runs/run.msmarco-v2-doc.bm25-default+rm3.topics.dl21.txt
tools/eval/trec_eval.9.0.4/trec_eval -c -M 100 -m recip_rank -m ndcg_cut.10 src/main/resources/topics-and-qrels/qrels.dl21-doc.txt runs/run.msmarco-v2-doc.bm25-default+rm3.topics.dl21.txt

tools/eval/trec_eval.9.0.4/trec_eval -c -M 100 -m map src/main/resources/topics-and-qrels/qrels.dl21-doc.txt runs/run.msmarco-v2-doc.bm25-default+rocchio.topics.dl21.txt
tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.100 src/main/resources/topics-and-qrels/qrels.dl21-doc.txt runs/run.msmarco-v2-doc.bm25-default+rocchio.topics.dl21.txt
tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.1000 src/main/resources/topics-and-qrels/qrels.dl21-doc.txt runs/run.msmarco-v2-doc.bm25-default+rocchio.topics.dl21.txt
tools/eval/trec_eval.9.0.4/trec_eval -c -M 100 -m recip_rank -m ndcg_cut.10 src/main/resources/topics-and-qrels/qrels.dl21-doc.txt runs/run.msmarco-v2-doc.bm25-default+rocchio.topics.dl21.txt
```
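These invocations all follow the same pattern; purely as a convenience (this loop is a sketch, not part of the regression framework), the following produces the same output:

```bash
QRELS=src/main/resources/topics-and-qrels/qrels.dl21-doc.txt
# Loop over the three run tags used in the output file names above.
for model in bm25-default bm25-default+rm3 bm25-default+rocchio; do
  RUN=runs/run.msmarco-v2-doc.${model}.topics.dl21.txt
  tools/eval/trec_eval.9.0.4/trec_eval -c -M 100 -m map ${QRELS} ${RUN}
  tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.100 ${QRELS} ${RUN}
  tools/eval/trec_eval.9.0.4/trec_eval -c -m recall.1000 ${QRELS} ${RUN}
  tools/eval/trec_eval.9.0.4/trec_eval -c -M 100 -m recip_rank -m ndcg_cut.10 ${QRELS} ${RUN}
done
```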

## Effectiveness

With the above commands, you should be able to reproduce the following results:

| DL21 (Doc) | BM25 (default) | +RM3   | +Rocchio |
|:-----------|---------------:|-------:|---------:|
| MAP@100    | 0.2126         | 0.2452 | 0.2467   |
| MRR@100    | 0.8367         | 0.7914 | 0.7997   |
| nDCG@10    | 0.5116         | 0.5304 | 0.5476   |
| R@100      | 0.3195         | 0.3376 | 0.3456   |
| R@1000     | 0.6739         | 0.7341 | 0.7367   |

Some of these regressions correspond to official TREC 2021 Deep Learning Track "baseline" submissions (the parameter settings can also be passed explicitly, as sketched after this list):

+ `d_bm25` = BM25 (default), `k1=0.9`, `b=0.4`
+ `d_bm25rm3` = BM25 (default) + RM3, `k1=0.9`, `b=0.4`
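Since `k1=0.9`, `b=0.4` are Anserini's BM25 defaults, the runs above already use these values implicitly. If you want to pin them down explicitly, `SearchCollection` accepts per-model parameter flags; a minimal sketch, assuming the `-bm25.k1`/`-bm25.b` flag names (worth double-checking against your Anserini version):

```bash
# Same as the baseline BM25 run, with the default parameters spelled out.
target/appassembler/bin/SearchCollection \
  -index indexes/lucene-index.msmarco-v2-doc/ \
  -topics src/main/resources/topics-and-qrels/topics.dl21.txt \
  -topicreader TsvInt \
  -output runs/run.msmarco-v2-doc.bm25-explicit.topics.dl21.txt \
  -hits 1000 -bm25 -bm25.k1 0.9 -bm25.b 0.4
```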