BartForQuestionAnswering #4908
Conversation
Codecov Report
@@ Coverage Diff @@
## master #4908 +/- ##
==========================================
+ Coverage 76.99% 77.02% +0.03%
==========================================
Files 128 128
Lines 21602 21635 +33
==========================================
+ Hits 16633 16665 +32
- Misses 4969 4970 +1
Continue to review full report at Codecov.
LGTM
Thanks for the contribution @patil-suraj!
Hi! Very cool @patil-suraj. Could you also add BartForQuestionAnswering to the all_model_classes in test_modeling_bart.py?
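A minimal sketch of the pattern the reviewer is asking for: the Bart test module keeps a tuple of every model head under test, and the new head is appended so it runs through the shared model tests. The class bodies here are placeholders, not the real transformers implementations.

```python
# Placeholder stand-ins for the real classes in modeling_bart.py.
class BartModel: ...
class BartForConditionalGeneration: ...
class BartForQuestionAnswering: ...

# The shared test harness iterates over this tuple, so adding the new
# head here is enough to exercise it with the common model tests.
all_model_classes = (
    BartModel,
    BartForConditionalGeneration,
    BartForQuestionAnswering,  # added per the review request
)
```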
Hi, @LysandreJik. Now for some reason
Awesome work @patil-suraj - I can help you with this test :-)
I see what the problem is... it's actually not related to your PR at all. Could you for now just remove
Thank you @patrickvonplaten. I've removed it from
This PR adds BartForQuestionAnswering. Decided to add this model as BART is intended for both NLU and NLG tasks and also achieves comparable performance to RoBERTa on SQuAD. Also fine-tuned the model here. The metrics are slightly worse than those given in the paper. Got the following metrics on SQuADv1:
{'exact_match': 86.80227057710502, 'f1': 92.73424907872341}
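For reference, the SQuAD f1 reported above is a token-overlap score between the predicted and gold answer strings. A rough, self-contained sketch of that metric (simplified: no lowercasing, punctuation, or article stripping, which the official evaluation script also applies):

```python
from collections import Counter

def squad_f1(prediction: str, ground_truth: str) -> float:
    """Token-level F1 between a predicted answer and a gold answer."""
    pred_tokens = prediction.split()
    gold_tokens = ground_truth.split()
    # Multiset intersection counts each shared token at most as often
    # as it appears in both strings.
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

For example, `squad_f1("the cat", "the cat sat")` gives precision 1.0 and recall 2/3, so F1 = 0.8.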
@sshleifer, @patrickvonplaten