Discussion around BERTserini paper #31

Closed
fmikaelian opened this issue Feb 16, 2019 · 4 comments

Comments

@fmikaelian
Collaborator

See: https://export.arxiv.org/pdf/1902.01718

@fmikaelian
Collaborator Author

fmikaelian commented Feb 17, 2019

Here are my takeaways:

  • They use Anserini as their document retriever, built on the open-source Lucene search library; it uses the BM25 ranking function.
  • There are two types of retrievers: single-stage vs. multi-stage.
  • Article retrieval underperforms paragraph retrieval by a large margin.
  • To score predictions they use a weighted linear interpolation between the BERT and Anserini scores: S = (1 - μ) * S_anserini + μ * S_bert with μ = 0.5 (see the sketch after this list).
  • In production mode they retrieve k=10 paragraphs.
  • Answering a question with this setup takes 2.35 seconds on average on a Tesla P40 GPU.
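A minimal sketch of that interpolation, assuming the Anserini and BERT scores for the k retrieved paragraphs are already available (the numbers below are made up, not from the paper):

```python
import numpy as np

# Hypothetical scores for k=10 retrieved paragraphs: BM25 scores from
# Anserini and the corresponding BERT answer-span scores (made-up values).
anserini_scores = np.array([12.3, 10.8, 9.7, 9.1, 8.6, 8.0, 7.4, 7.1, 6.5, 6.2])
bert_scores = np.array([4.1, 7.9, 2.3, 6.0, 1.8, 5.2, 3.3, 0.9, 2.7, 4.4])

mu = 0.5  # interpolation weight reported in the paper

# S = (1 - mu) * S_anserini + mu * S_bert
combined = (1 - mu) * anserini_scores + mu * bert_scores

best = int(np.argmax(combined))
print(f"Best paragraph: {best}, combined score: {combined[best]:.2f}")
```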

They modified BERT to compare predictions in a meaningful way (see #36):

to allow comparison and aggregation of results from different segments, we remove the final softmax layer over different answer spans.

But this still needs to be clarified.

@andrelmfarias
Collaborator

What I understand from the quote below is that they only use the logits (raw scores) for comparison between answer spans, instead of using the probabilities after applying the softmax function.

to allow comparison and aggregation of results from different segments, we remove the final softmax layer over different answer spans.
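A toy example of what that implies (made-up logits, not the actual BERT code):

```python
import numpy as np

# Hypothetical raw span logits from two different segments (paragraphs).
# Softmax probabilities are normalized per segment, so they cannot be
# compared across segments; the raw logits can.
segment_a_logits = np.array([2.0, 5.0, 1.0])   # best span logit: 5.0
segment_b_logits = np.array([7.0, 6.5, 6.8])   # best span logit: 7.0

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Per-segment probabilities: segment A's best span looks stronger (~0.94 vs ~0.41)
print(softmax(segment_a_logits).max(), softmax(segment_b_logits).max())

# Raw logits: segment B's best span actually scores higher (7.0 vs 5.0),
# which is the cross-segment comparison the paper wants to keep.
print(segment_a_logits.max(), segment_b_logits.max())
```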

@fmikaelian what do you think?

@fmikaelian
Collaborator Author

Yes

It would be useful to cross-check with Danqi Chen's thesis, where she mentions something similar for their DrQA system:
https://cs.stanford.edu/~danqi/papers/thesis.pdf

We should also follow this thread: huggingface/transformers#360

@fmikaelian
Collaborator Author

In section 5.2.3 of Danqi Chen's thesis:

We apply our trained DOCUMENT READER for each single paragraph that appears in the top 5 Wikipedia articles and it predicts an answer span with a confidence score. To make scores compatible across paragraphs in one or several retrieved documents, we use the unnormalized exponential and take argmax over all considered paragraph spans for our final prediction. This is just a very simple heuristic and there are better ways to aggregate evidence over different paragraphs.
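A toy sketch of that heuristic (hypothetical spans and scores, not DrQA's actual code):

```python
import numpy as np

# Hypothetical (answer span, raw score) predictions, one per paragraph.
candidates = [
    ("Warsaw", 3.2),   # best span from paragraph 1
    ("Krakow", 5.1),   # best span from paragraph 2
    ("Warsaw", 4.4),   # best span from paragraph 3
]

# Unnormalized exponential of the raw scores (no per-paragraph softmax
# normalization), then argmax over all considered paragraph spans.
scores = np.exp([score for _, score in candidates])
best_span, _ = candidates[int(np.argmax(scores))]
print(best_span)  # "Krakow"
```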

This part seems to be implemented here:

https://github.com/facebookresearch/DrQA/blob/d27180fc527084263ca0e43091f5d35c4bbd4963/drqa/reader/layers.py#L243

And here:

https://github.com/facebookresearch/DrQA/blob/1f811ded549a69f8b5ea303fb6f6d35ad6fc84ae/drqa/pipeline/drqa.py#L113

Our predict() function does not currently return this confidence score. How can we get it in our setup and modify it for comparison?

@fmikaelian fmikaelian moved this from To Review 📚 to Under Review 🔍 in Kanban Board Mar 11, 2019
@fmikaelian fmikaelian moved this from Under Review 🔍 to Done ✅ in Kanban Board Mar 23, 2019