diff --git a/docs/experiments-distilbert_kd.md b/docs/experiments-distilbert_kd.md
index 355751d16..2e8d2b9ba 100644
--- a/docs/experiments-distilbert_kd.md
+++ b/docs/experiments-distilbert_kd.md
@@ -1,6 +1,6 @@
 # Pyserini: Reproducing DistilBERT KD Results
 
-This guide provides instructions to reproduce the TCT-ColBERT dense retrieval model on the MS MARCO passage ranking task, described in the following paper:
+This guide provides instructions to reproduce the DistilBERT KD dense retrieval model on the MS MARCO passage ranking task, described in the following paper:
 
 > Sebastian Hofstätter, Sophia Althammer, Michael Schröder, Mete Sertkan, and Allan Hanbury. [Improving Efficient Neural Ranking Models with Cross-Architecture Knowledge Distillation .](https://arxiv.org/abs/2010.02666) arXiv:2010.02666, October 2020.
 
diff --git a/docs/experiments-sbert.md b/docs/experiments-sbert.md
index 0d1c2753e..0f453fa65 100644
--- a/docs/experiments-sbert.md
+++ b/docs/experiments-sbert.md
@@ -40,7 +40,7 @@ Hybrid retrieval with dense-sparse representations (without document expansion):
 - dense retrieval with SBERT, brute force index.
 - sparse retrieval with BM25 `msmarco-passage` (i.e., default bag-of-words) index.
 
-```bas
+```bash
 $ python -m pyserini.hsearch dense --index msmarco-passage-sbert-bf \
              --encoded-queries sbert-msmarco-passage-dev-subset \
              sparse --index msmarco-passage \