SPARTA

Re-implementation of SPARTA: Efficient Open-Domain Question Answering via Sparse Transformer Matching Retrieval. This is the re-implementation we used for BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models.
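The core idea of SPARTA, as described in the paper, can be sketched as follows: each document is scored offline against every (non-contextualized) vocabulary embedding via a max-over-tokens dot product, sparsified with ReLU and log(1 + x); at query time, scoring reduces to a sparse lookup-and-sum over the query's token ids. The sketch below uses toy random embeddings and is an illustration of the scoring scheme, not this repository's code:

```python
import numpy as np

def sparta_term_scores(vocab_emb, doc_emb):
    """Document-side SPARTA scoring (sketch).

    vocab_emb: (V, d) non-contextualized vocabulary embeddings
    doc_emb:   (L, d) contextualized document token embeddings
    Returns a (V,) vector of term weights:
        score(t) = log(relu(max_j e_t . h_j) + 1)
    """
    sim = vocab_emb @ doc_emb.T                 # (V, L) term-vs-token dot products
    max_sim = sim.max(axis=1)                   # best-matching doc token per vocab term
    return np.log1p(np.maximum(max_sim, 0.0))   # ReLU then log(1 + x) sparsifies

def sparta_query_score(query_token_ids, term_scores):
    """Query-time scoring is just a sparse lookup-and-sum over query tokens."""
    return term_scores[query_token_ids].sum()

rng = np.random.default_rng(0)
vocab = rng.normal(size=(1000, 16))   # toy vocabulary embeddings
doc = rng.normal(size=(20, 16))       # toy contextualized document embeddings
scores = sparta_term_scores(vocab, doc)
print(sparta_query_score(np.array([3, 42, 7]), scores))
```

Because the document-side scores are query-independent, they can be precomputed and stored in an inverted index, which is what makes SPARTA retrieval efficient.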

Also have a look at our BEIR repository: https://github.com/UKPLab/beir

Note: This is research code; it is not in the best shape and is unfortunately not well documented.

Requirements

Training

See train_sparta_msmarco.py for how to train SPARTA on the MSMARCO Passage Ranking dataset. Note that the required training files are linked there; download them and place them in a data/ folder.
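Passage-ranking training of this kind typically optimizes a listwise ranking objective over one positive passage and several sampled negatives. A minimal sketch of such a softmax cross-entropy loss is shown below; this is a generic illustration, not necessarily the exact loss used in train_sparta_msmarco.py:

```python
import numpy as np

def ranking_loss(pos_score, neg_scores):
    """Softmax cross-entropy treating the positive passage as the correct class.

    A generic listwise ranking loss for illustration only; see
    train_sparta_msmarco.py for the loss actually used in this repo.
    """
    logits = np.concatenate(([pos_score], np.asarray(neg_scores, dtype=float)))
    logits -= logits.max()  # subtract the max for numerical stability
    return float(-logits[0] + np.log(np.exp(logits).sum()))

# A positive passage scored far above the negatives yields a near-zero loss.
print(ranking_loss(10.0, [0.5, -1.0, 0.2]))
```

Minimizing this loss pushes the positive passage's score above the negatives' scores for each training query.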

Evaluation

See eval_msmarco.py for how to evaluate a SPARTA model on the MSMARCO Passage Ranking dataset.
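MSMARCO passage ranking is commonly reported with MRR@10 (mean reciprocal rank of the first relevant passage within the top 10 results). A self-contained sketch of that metric, shown for reference only (the evaluation script may compute it differently):

```python
def mrr_at_k(ranked_doc_ids, relevant_ids, k=10):
    """Per-query reciprocal-rank contribution:
    1/rank of the first relevant passage within the top k, else 0."""
    for rank, doc_id in enumerate(ranked_doc_ids[:k], start=1):
        if doc_id in relevant_ids:
            return 1.0 / rank
    return 0.0

# Averaging over all queries gives the corpus-level MRR@10.
queries = [
    (["d3", "d7", "d1"], {"d7"}),  # first relevant at rank 2 -> 0.5
    (["d9", "d2", "d4"], {"d8"}),  # no relevant passage retrieved -> 0.0
]
mrr = sum(mrr_at_k(ranking, rel) for ranking, rel in queries) / len(queries)
print(mrr)  # 0.25
```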

Pretrained model

We provide a pre-trained model here: https://huggingface.co/BeIR/sparta-msmarco-distilbert-base-v1

Benchmark results

See BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models for how well our SPARTA implementation performs across several retrieval tasks.
