about embedding matrix structure #7
Comments
Hi @tongjinle123, you're right about fine-tuning the unknown-word vectors. The paper suggests learning the unknown-word vectors, and we should do it.
Could you please share a paper, article, or experiment that discusses this problem? Sorry for the bad comment format.
Refer to #13 for this issue.
"All the out-of-vocabulary words are mapped to a token, whose embedding is trainable with random initialization." This is not in your code (you use a pretrained matrix). That seems to make sense. Does that work for the model?
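To illustrate what the paper describes, here is a minimal sketch (in PyTorch, which is an assumption about the repo's framework; `WordEmbedding` and `unk_idx` are hypothetical names, not the project's actual code): the pretrained vectors are kept frozen, while the unknown-word vector is randomly initialized and trainable.

```python
import torch
import torch.nn as nn

class WordEmbedding(nn.Module):
    """Frozen pretrained embeddings plus a trainable <unk> vector (sketch)."""

    def __init__(self, pretrained: torch.Tensor, unk_idx: int = 0):
        super().__init__()
        self.unk_idx = unk_idx
        # Pretrained rows (e.g. GloVe) stay fixed during training.
        self.emb = nn.Embedding.from_pretrained(pretrained, freeze=True)
        # The unknown-word vector is randomly initialized and learned.
        self.unk = nn.Parameter(torch.randn(pretrained.size(1)))

    def forward(self, ids: torch.Tensor) -> torch.Tensor:
        out = self.emb(ids)
        # Wherever the id is <unk>, substitute the trainable vector.
        mask = (ids == self.unk_idx).unsqueeze(-1)
        return torch.where(mask, self.unk.expand_as(out), out)

# Stand-in pretrained matrix: 5 words, 3 dimensions.
pretrained = torch.randn(5, 3)
emb = WordEmbedding(pretrained, unk_idx=0)
vecs = emb(torch.tensor([[0, 2, 4]]))  # position 0 uses the trainable <unk>
```

During backprop, only `emb.unk` receives gradients; the pretrained rows are untouched, which matches the "trainable with random initialization" setup quoted above.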