Fine-tune GPT-2 for a question-answering task #75
AseelAlshorafa started this conversation in Ideas
Replies: 1 comment · 1 reply
-
First, I just want to thank you for your effort. I saw the examples, and you used zero-shot question answering. Is there any way I can fine-tune the model on question-answering datasets like ARCD?
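
For the generative route, one common option is to serialize each ARCD example into a single training sequence and continue ordinary causal-LM fine-tuning on it. A minimal sketch, assuming ARCD is published on the Hugging Face Hub under the dataset id `arcd` with SQuAD-style fields (`context`, `question`, `answers`):

```python
from datasets import load_dataset

# Assumption: ARCD is available on the Hugging Face Hub under the id
# "arcd", with SQuAD-style fields (context, question, answers).
dataset = load_dataset("arcd")

def to_training_text(example):
    # Serialize one QA pair into a single sequence so the model can be
    # fine-tuned with the ordinary next-token (causal LM) objective.
    answer = example["answers"]["text"][0]
    example["text"] = (
        f"السياق: {example['context']}\n"
        f"السؤال: {example['question']}\n"
        f"الجواب: {answer}"
    )
    return example

train_set = dataset["train"].map(to_training_text)
print(train_set[0]["text"])
```

At inference time you would prompt with the context and question and let the model generate the answer line.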
-
Hey, if you want to do SQuAD-style fine-tuning (which I don't think will work as well as AraBERT), take a look at this repository: https://github.com/ftarlaci/GPT2sQA
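
That repository pairs GPT-2 with a BERT-style span-prediction head for extractive QA. A rough sketch of the general shape, not the repository's actual code: the `GPT2SpanQA` class name is made up for illustration, and the default `gpt2` checkpoint is a placeholder for an Arabic GPT-2.

```python
import torch.nn as nn
from transformers import GPT2Model, GPT2Tokenizer

class GPT2SpanQA(nn.Module):
    """GPT-2 backbone plus a linear head that scores each token as a
    potential answer-span start or end (the usual extractive-QA setup)."""

    def __init__(self, model_name="gpt2"):  # swap in an Arabic GPT-2 checkpoint here
        super().__init__()
        self.backbone = GPT2Model.from_pretrained(model_name)
        # Two logits per token: start-of-answer and end-of-answer scores.
        self.qa_head = nn.Linear(self.backbone.config.hidden_size, 2)

    def forward(self, input_ids, attention_mask=None):
        hidden = self.backbone(
            input_ids, attention_mask=attention_mask
        ).last_hidden_state
        start_logits, end_logits = self.qa_head(hidden).split(1, dim=-1)
        return start_logits.squeeze(-1), end_logits.squeeze(-1)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2SpanQA()
inputs = tokenizer("Who wrote the book? The book was written by X.", return_tensors="pt")
start_logits, end_logits = model(**inputs)
# Training minimizes cross-entropy between these logits and the gold
# answer-span start/end positions, as in BERT-style SQuAD fine-tuning.
```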