[Feature Request] Extract Predictions from Trainer #5547
Comments
Unless I'm mistaken, question answering does support Trainer. Is a specific feature missing for your use case?
Hi @julien-c! At the time of writing, the PR was not merged yet (and I was not aware of it, my bad). It is great! In any case, the previous example (the one without Trainer) output both. The new example with Trainer loses this ability (because the new dataset/trainer API does not support it). Adding this feature could benefit not only SQuAD but other datasets too (e.g. RACE). What do you think in this regard?
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Any updates on this matter? I recently started using
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
🚀 Feature request
Add the possibility to return predictions along with their example ids in the new Trainer class.
Motivation
When working with extractive QA (e.g. SQuAD), you get back the best predictions, but the current example for running SQuAD uses the old, plain training/eval script, without the new Trainer class.
Additionally, there are other tasks where predictions can be extremely useful (e.g. Multiple Choice).
Adding such functionality to the Trainer class could solve this and unify both the question answering and multiple choice examples.
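To make the request concrete, here is a minimal sketch of the desired behavior: an evaluation loop that returns predictions keyed by example id, so SQuAD-style post-processing can map model outputs back to their examples. All names here (`predict_with_ids`, the toy model and dataset) are hypothetical stand-ins, not the actual Trainer API.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Iterable, List, Tuple

@dataclass
class PredictionOutput:
    # Maps example_id -> raw model outputs (e.g. span logits for SQuAD).
    predictions: Dict[str, List[float]]

def predict_with_ids(
    model: Callable[[List[int]], List[float]],
    dataset: Iterable[Tuple[str, List[int]]],
) -> PredictionOutput:
    """Run `model` over `dataset`, keeping each example's id.

    `dataset` yields (example_id, features) pairs and `model` maps
    features to logits; both are simplified stand-ins for the real
    Trainer/Dataset machinery.
    """
    preds: Dict[str, List[float]] = {}
    for example_id, features in dataset:
        preds[example_id] = model(features)
    return PredictionOutput(predictions=preds)

# Toy usage: a "model" that scores two candidate answers per example.
toy_model = lambda feats: [float(sum(feats)), float(max(feats))]
toy_dataset = [("squad-001", [1, 2, 3]), ("squad-002", [4, 0, 1])]

out = predict_with_ids(toy_model, toy_dataset)
# out.predictions["squad-001"] -> [6.0, 3.0]
```

With the ids preserved like this, the same loop would serve both the question answering and multiple choice examples: only the per-task post-processing of `predictions` would differ.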
Your contribution
I am familiar with the code (both the Trainer class and the old train/eval script), so I could submit a PR with the new functionality.