
Question about dataset #4

Closed
leejiwon1125 opened this issue Aug 10, 2023 · 4 comments


@leejiwon1125

For testing end-to-end performance on HotpotQA, I downloaded the reader model from
"https://msrdeeplearning.blob.core.windows.net/udq-qa/COS/models/hotpot_reader_checkpoint_best.pt" and the reader data from
"https://msrdeeplearning.blob.core.windows.net/udq-qa/COS/results/HotpotQA/hotpot_dev_reader_2hops.json".

I was able to get the same results as in Table A3 of the paper, but I have a question about the reader data.
What corpus did you use for evaluating HotpotQA? To be specific, what is the source of the retrieved documents given as the values of the keys "pos titles" and "title" in the reader data? I tried to find it, but I could only find details about the pretraining corpus.

Additionally, I would like to know the date of the Wikipedia dump used to build the "https://msrdeeplearning.blob.core.windows.net/udq-qa/COS/data/HotpotQA/hotpot_corpus.jsonl" file.

@Mayer123
Owner

Thank you for your interest in our work!

For the reader data, we used the official HotpotQA Wikipedia corpus, which contains the first paragraphs of all articles from a 2017 Wikipedia dump. You can also download the corpus from the HotpotQA official website (https://nlp.stanford.edu/projects/hotpotqa/enwiki-20171001-pages-meta-current-withlinks-abstracts.tar.bz2) or from the repos of previous works, e.g. https://github.com/facebookresearch/multihop_dense_retrieval. With some data reformatting, you should be able to get the same content as in hotpot_corpus.jsonl.
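For reference, that reformatting step could look roughly like the sketch below. This is only an illustration, not the script used for the release: the glob pattern assumes the directory layout of the extracted abstracts dump, and the output keys ("id", "title", "text") are an assumed schema, so compare against hotpot_corpus.jsonl for the exact fields the pipeline expects.

```python
# Minimal sketch (not the release script) for turning the HotpotQA abstracts
# dump into a jsonl corpus. Output field names are assumptions; verify them
# against hotpot_corpus.jsonl before relying on this.
import bz2
import glob
import json

def convert(dump_dir: str, out_path: str) -> None:
    with open(out_path, "w", encoding="utf-8") as out:
        # The extracted dump is a tree of bz2 files (e.g. AA/wiki_00.bz2),
        # each holding one JSON object per line.
        for path in sorted(glob.glob(f"{dump_dir}/*/wiki_*.bz2")):
            with bz2.open(path, "rt", encoding="utf-8") as f:
                for line in f:
                    page = json.loads(line)
                    # In the abstracts dump, "text" is a list of sentences of
                    # the first paragraph; join them into one string
                    # (verify this against your copy of the dump).
                    record = {
                        "id": page["id"],
                        "title": page["title"],
                        "text": "".join(page["text"]),
                    }
                    out.write(json.dumps(record, ensure_ascii=False) + "\n")

if __name__ == "__main__":
    convert("enwiki-20171001-pages-meta-current-withlinks-abstracts",
            "hotpot_corpus.jsonl")
```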

@leejiwon1125
Author

I really appreciate your response. I have two more questions.

When I evaluate the FiE reader using the provided hotpot_dev_reader_2hops.json, I obtain an EM score of 68.2, as reported in the paper. However, when I run the evaluation end to end from scratch, I obtain an EM score of 61.9, which is a large gap from 68.2.

The model I used is cos_nq_ott_hotpot_finetuned_6_experts.ckpt, and the data consists of hotpot_wiki_linker* at the span stage, hotpot_wiki_retriever* at the linking stage, and hotpot_corpus.jsonl at the COS stage (in the COS stage, single retrieve, rerank, expanded retrieve, link from the hop-1 passage, and rerank were executed). After the COS stage, I fed the top-1 path into the reader. During the execution of train_qa_hotpot.py, hotpot_reader_checkpoint_best.pt was used in both cases.

May I ask about the possible reasons for these discrepant results? (I am curious whether a model other than cos_nq_ott_hotpot_finetuned_6_experts.ckpt was used to create hotpot_dev_reader_2hops.json.)

Also, I am curious about the difference between hotpot_wiki_linker* and hotpot_wiki_retriever*, both of which are precomputed embeddings of the HotpotQA corpus.
Thank you.

@Mayer123
Owner

Just to clarify: "The model I used is cos_nq_ott_hotpot_finetuned_6_experts.ckpt" is correct. However, we do not need any embeddings at the span stage, and we use "hotpot_wiki_linker*" at the linking stage.

After the COS stage, we actually run https://msrdeeplearning.blob.core.windows.net/udq-qa/COS/models/hotpot_path_reranker_checkpoint_best.pt to get the top-1 path from the top-100 COS results, and then that top-1 path is fed to hotpot_reader_checkpoint_best.pt. (This is also explained in Appendix B and Appendix C.)
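Schematically, that hand-off looks like the sketch below. The functions are trivial placeholders rather than the repo's actual API; only the order of operations matters: COS top-100 paths, then the path reranker, then the single top-1 path goes to the reader.

```python
# Schematic only: rerank_paths and read_answer are dummy stand-ins for
# hotpot_path_reranker_checkpoint_best.pt and hotpot_reader_checkpoint_best.pt.
# The point is the pipeline order, not the scoring logic.

def rerank_paths(question, paths):
    # Placeholder for the path reranker: score each candidate passage path.
    return [{"path": p, "score": float(len(p["titles"]))} for p in paths]

def read_answer(question, path):
    # Placeholder for the FiE reader: extract an answer from the top-1 path.
    return f"answer extracted from {path['titles']}"

def end_to_end(question, cos_top100):
    scored = rerank_paths(question, cos_top100)
    top1 = max(scored, key=lambda s: s["score"])["path"]  # keep only the best path
    return read_answer(question, top1)

if __name__ == "__main__":
    dummy_paths = [{"titles": ["Doc A", "Doc B"]}, {"titles": ["Doc C"]}]
    print(end_to_end("Example question?", dummy_paths))
```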

hotpot_wiki_linker* are computed with expert id 3 and hotpot_wiki_retriever* are computed with expert id 1. As shown in our Table 7, running inference with different experts actually leads to quite different results, so it's important to use the correct experts/embeddings at every stage.
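For bookkeeping, the stage-to-embeddings pairing described above can be kept in a small map like the one below. This is just a reference table for sanity-checking a run, not a config format the repo itself reads.

```python
# Which precomputed embeddings (and the expert id they were built with) belong
# to which stage, per the explanation above. Purely for bookkeeping.
STAGE_EMBEDDINGS = {
    "span":      None,  # no corpus embeddings are needed at the span stage
    "linking":   {"files": "hotpot_wiki_linker*", "expert_id": 3},
    "retrieval": {"files": "hotpot_wiki_retriever*", "expert_id": 1},
}

def check_stage(stage: str, files: str) -> None:
    # Fail fast if a stage is about to be run with the wrong embeddings.
    expected = STAGE_EMBEDDINGS.get(stage)
    if expected is not None and files != expected["files"]:
        raise ValueError(f"{stage} stage expects {expected['files']}, got {files}")
```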

@leejiwon1125
Author

Thank you for your kind assistance. That resolves all of my questions :)
