As pointed out by @thomwolf in #2255, we should consider breaking with the pipeline taxonomy of transformers to account for the various types of question-answering domains:
question-answering exists in two forms: abstractive and extractive question answering.
we can keep a generic question-answering, but it will probably mean a different input/output schema for each: abstractive will use text for both, while extractive can use span indices as well as text.
It is probably best to align with one of the existing taxonomies in terms of naming; Papers with Code is probably the most active and well maintained, and we work with them as well.
You may want to check against a few QA datasets that this schema makes sense. NaturalQuestions and TriviaQA can be good second datasets to compare against to be sure of the generality of the schema.
A good recent list of QA datasets to compare schemas across is given, for instance, in the UnitedQA paper: https://arxiv.org/abs/2101.00178
Investigate which grouping of QA is best suited for datasets and adapt / extend the QA task template accordingly.
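To make the distinction concrete, here is a minimal sketch of what the two schemas could look like. The class names and fields are illustrative assumptions for this discussion, not part of the `datasets` task template API:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class AbstractiveQAExample:
    """Hypothetical abstractive schema: answers are free-form text only."""
    question: str
    context: str
    answers: List[str]


@dataclass
class ExtractiveQAExample:
    """Hypothetical extractive schema: answers are spans of the context,
    stored both as text and as character start offsets."""
    question: str
    context: str
    answer_texts: List[str]
    answer_starts: List[int]

    def __post_init__(self):
        # Sanity check: each answer text must match its span in the context.
        for text, start in zip(self.answer_texts, self.answer_starts):
            assert self.context[start:start + len(text)] == text, (
                f"answer {text!r} does not match context at offset {start}"
            )


# SQuAD-style usage of the extractive variant:
example = ExtractiveQAExample(
    question="Where is the Eiffel Tower?",
    context="The Eiffel Tower is in Paris.",
    answer_texts=["Paris"],
    answer_starts=[23],
)
```

The point of the span check is that an extractive schema carries a consistency constraint (spans must resolve to the stored text) that a purely abstractive schema does not, which is one argument for keeping the two templates separate.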