Loss function is not squared in glove_cython? #22
Comments
Not sure if I am missing something here, but I thought I'd ask for clarification: the loss function is not squared.

Also, does this implementation not generate separate vectors for when a word is used as context?

Bad variable name. I think this is the gradient of the loss function. Yes, this implementation does not generate separate vectors for context words. This makes it more memory efficient, as I can use an upper triangular matrix for the co-occurrence matrix (and only one matrix of vectors).

Clear - thanks maciejkula.

FYI @maciejkula :

Interesting, I'll definitely have a look. Incidentally, I think my more recent project (https://github.com/lyst/lightfm) should work really well on word embeddings (it uses a fancy learning-to-rank approach), I need to try it out.
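The "gradient of the loss" point can be illustrated with a small sketch in plain NumPy (this is not the glove_cython code; the function and parameter names here are my own). The GloVe objective for one co-occurring pair is f(X_ij) * (w_i . w_j + b_i + b_j - log X_ij)^2; differentiating with respect to the parameters removes the square, which is why update code can look "unsquared" while still minimizing a squared loss:

```python
import numpy as np

def glove_pair_loss_and_grad(w_i, w_j, b_i, b_j, x_ij, x_max=100.0, alpha=0.75):
    """Illustrative sketch of the per-pair GloVe loss and its gradient
    with respect to w_i (assumed names, not the glove_cython API)."""
    # Weighting function f(x) from the GloVe paper, capped at 1.
    f = (x_ij / x_max) ** alpha if x_ij < x_max else 1.0
    # Prediction error before squaring.
    inner = np.dot(w_i, w_j) + b_i + b_j - np.log(x_ij)
    # The loss *is* squared ...
    loss = f * inner ** 2
    # ... but d(loss)/d(w_i) = 2 * f * inner * w_j: after differentiation
    # the inner term appears to the first power, so gradient-update code
    # contains no square.
    grad_w_i = 2.0 * f * inner * w_j
    return loss, grad_w_i
```

A finite-difference check on `loss` confirms that `grad_w_i` really is the gradient of the squared objective, so an "unsquared" expression in the update loop is consistent with minimizing a squared loss.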