Create my own Model for Sentence Similarity/Automated Scoring #72
Comments
hi @dhimasyoga16 thank you for your interest in this repo. I am not sure what the question is asking. Are you asking how to fine-tune the model? If you have already fine-tuned it, feel free to follow up with more questions.
Hi, thank you so much for the quick assist and the answer. Can I ask one more question? Thank you so much once again :)
hi @dhimasyoga16 if you have precomputed the features, you can modify the code (https://github.com/Tiiiger/bert_score/blob/master/bert_score/utils.py#L253) to load the features instead of computing them again.
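To illustrate the idea (this is a standalone sketch, not the actual change to `utils.py`): a small helper that loads cached features from disk and only computes the sentences that are missing. The function name, cache path, and `compute_fn` callback are all hypothetical.

```python
import os
import pickle

def get_features(sents, compute_fn, cache_path="features.pkl"):
    """Return features for each sentence, loading precomputed ones
    from a pickle cache and computing only the missing entries.

    compute_fn: a hypothetical callback that maps a list of
    sentences to a list of features (e.g. BERT embeddings).
    """
    cache = {}
    if os.path.exists(cache_path):
        with open(cache_path, "rb") as f:
            cache = pickle.load(f)

    missing = [s for s in sents if s not in cache]
    if missing:
        # Compute features only for sentences not seen before,
        # then persist the updated cache for the next run.
        cache.update(zip(missing, compute_fn(missing)))
        with open(cache_path, "wb") as f:
            pickle.dump(cache, f)

    return [cache[s] for s in sents]
```

On a second run, `compute_fn` is only invoked for sentences that were not already cached, which is the same saving the suggested `utils.py` modification would give.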
Hi, sorry for the inactivity on this issue. How can I fine-tune the BERT multilingual model for the Indonesian language? Can I use a Wikipedia dump file as a corpus?
hi @dhimasyoga16 this is a question that is better posed to the huggingface repo. Hopefully they will have detailed instructions. We are not really experts on this topic.
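For reference, the huggingface transformers repo ships a masked-language-modeling example script; a rough sketch of invoking it on an extracted Indonesian Wikipedia text file might look like the following. The file names and output directory are placeholders, and the exact flags should be checked against the version of the examples script you use.

```shell
# Sketch only: assumes transformers' examples language-modeling
# script run_mlm.py is available in the current directory.
# id_wiki.txt is a placeholder for the extracted Wikipedia dump.
python run_mlm.py \
  --model_name_or_path bert-base-multilingual-cased \
  --train_file id_wiki.txt \
  --do_train \
  --output_dir ./bert-id-finetuned
```

The resulting checkpoint directory can then be loaded like any other huggingface model.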
Hi, I've successfully created my language model using huggingface transformers. When I'm trying to run a test (using my own model, of course), why does the … Sorry for asking so many questions; I'm new to NLP.
hi @dhimasyoga16 please see our paper for the effect of using different …
I have a Wikipedia dump file as my corpus (it's in Indonesian; I've extracted it and converted it to .txt).
How can I fine-tune bert multilingual cased on this corpus with BERTScore, so that I have my own model for a specific task such as sentence similarity or automated short-answer scoring? Or should I do this with the original BERT instead?
Thank you so much in advance.