Code for the paper "Sentence-aware Contrastive Learning for Open-Domain Passage Retrieval" (ACL 2022 Main Conference, long paper). DCSR aims to eliminate the occurrence of Contrastive Conflicts, in order to provide a more general dense retriever for practical use.
This code is mainly based on DPR. We thank its authors for open-sourcing their code.
The problem of Contrastive Conflicts (left) and our solution (right). Since the relationship between queries and documents is not one-to-one, semantically different queries may be pulled together by the same document. We propose to change the modelling granularity from the document to the contextual sentence to reduce the occurrence of such conflicts.
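To make the conflict concrete, here is a minimal sketch (not the repo's actual training code; variable and function names are ours) of the standard DPR-style in-batch contrastive objective from which these conflicts arise: if two different queries in a batch share the same positive passage, each treats the other's positive as an in-batch negative, so the shared passage is pushed toward both queries at once. With sentence-level units, the two queries can be paired with two different sentences, so their positives no longer collide.

import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(q_vecs: torch.Tensor, p_vecs: torch.Tensor) -> torch.Tensor:
    """q_vecs: [B, d] query embeddings; p_vecs: [B, d] embeddings of each query's
    positive unit (a passage for DPR, a contextual sentence for DCSR)."""
    scores = q_vecs @ p_vecs.t()            # [B, B] dot-product similarity matrix
    targets = torch.arange(q_vecs.size(0))  # diagonal entries are the positives
    return F.cross_entropy(scores, targets)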
DCSR uses the same environment as DPR. In our experiments we use python==3.7.11 and pytorch==1.7.1.
git clone https://github.com/chengzhipanpan/DCSR
cd DCSR
pip install -r requirements.txt
pip install en_core_web_sm-3.0.0-py3-none-any.whl
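The en_core_web_sm wheel is spaCy's small English pipeline, used here for sentence segmentation. As a minimal illustration of splitting a passage into the contextual-sentence units described above (a sketch only, not the repo's preprocessing scripts; field names are illustrative):

import spacy

nlp = spacy.load("en_core_web_sm")

def to_contextual_sentences(passage: str, title: str = ""):
    """Segment a passage into sentences; each sentence keeps its title and
    passage as surrounding context and becomes its own retrieval unit."""
    doc = nlp(passage)
    return [
        {"title": title, "sentence": sent.text, "context": passage}
        for sent in doc.sents
        if sent.text.strip()
    ]

units = to_contextual_sentences(
    "Marie Curie was born in Warsaw. She won two Nobel Prizes.",
    title="Marie Curie",
)
for u in units:
    print(u["sentence"])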
# Prepare Data for Training
bash train_scripts/prepare_dataset.sh
# Prepare Data for Evaluation
bash train_scripts/prepare_wiki.sh
Sample scripts for training
# training multiset
bash train_scripts/train_multiset.sh
# training NQ
bash train_scripts/train_nq.sh
# training SQuAD
bash train_scripts/train_squad1.sh
# training Trivia
bash train_scripts/train_trivia.sh
Sample script for evaluation
bash train_scripts/eval_multiset.sh
Our pretrained models can be accessed via the following links.
- NQ-single Google Drive
- Trivia-single Google Drive
- SQuAD-single Google Drive
- Multiset Google Drive
If you find this work useful, please cite the following paper:
@inproceedings{wu2022sentence,
title={Sentence-aware Contrastive Learning for Open-Domain Passage Retrieval},
author={Wu, Bohong and Zhang, Zhuosheng and Wang, Jinyuan and Zhao, Hai},
booktitle={The 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022)},
year={2022}
}