I have a question about how you evaluated GPT-2 on the CoQA dataset.
We are struggling to reproduce the results reported in the paper (55 F1). We evaluated gpt2-xl from HuggingFace on CoQA and got an F1 of 28.7.
We used the official dev set and evaluation script, which we downloaded from here. Although the model produces good answers, they receive a lower score because of how the original CoQA benchmark evaluator is set up. Did you evaluate it differently?
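For context, the official CoQA evaluator computes a SQuAD-style token-level F1 after text normalization, which penalizes verbose free-form generations even when they contain the correct answer. A minimal sketch of that metric (not the official script, just an illustration of the scoring behavior) shows why correct but wordy answers can score around 30 F1:

```python
import re
import string
from collections import Counter

def normalize(text):
    """Lower-case, strip punctuation and articles, collapse whitespace
    (SQuAD/CoQA-style answer normalization)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def f1_score(prediction, gold):
    """Token-overlap F1 between a predicted and a gold answer."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

For example, against a gold answer of "white", an exact prediction scores 1.0, while a verbose but correct generation like "the cat was white in color" scores only about 0.33, since precision is diluted by the extra tokens. Truncating the generation at the first newline or sentence boundary before scoring is one common mitigation.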