Training with custom dataset #3
The pre-trained model comes from https://github.com/huggingface/transformers.
Thanks for providing this, Kamal! For anyone else interested, I was able to get better performance with the following model, which is provided by the transformers library directly (you need to edit bert.py):

```python
from transformers import BertConfig, BertTokenizer, BertForQuestionAnswering

# Load the SQuAD-finetuned BERT-large checkpoint from the Hugging Face hub.
config = BertConfig.from_pretrained('bert-large-uncased-whole-word-masking-finetuned-squad')
tokenizer = BertTokenizer.from_pretrained('bert-large-uncased-whole-word-masking-finetuned-squad')
model = BertForQuestionAnswering.from_pretrained('bert-large-uncased-whole-word-masking-finetuned-squad', config=config)
```
Hi @gurvesh, what are the EM and F1 scores of that model?
I think the scores are about the same. But in actual usage, I found the answers the finetuned-squad model gave on random articles I put through it were much better. YMMV.
Hi Kamal,
Can you please share how to do fine-tuning with a custom dataset?