
Question about target and context words #13

Closed
dkajtoch opened this issue Sep 7, 2019 · 2 comments

Comments

@dkajtoch
dkajtoch commented Sep 7, 2019

I have a question about the research approach described in your Nature paper. You use the phrases "target word" and "context word" there. Normally, in the skip-gram model, the embedding for the "target word" (input layer) is different from the embedding for the "context word" (output layer). In gensim, if you use model.wv.most_similar you are effectively searching for similar words using embeddings from the input layer. You can also access the "context word" embeddings via model.syn1neg. Were you using both embeddings when analyzing, e.g., the relation between a chemical compound and "thermoelectric"?
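For readers unfamiliar with the distinction being asked about, here is a minimal numpy sketch (toy random matrices, hypothetical vocabulary) of why input-layer and output-layer embeddings give different similarity scores in a skip-gram model. The comments relating the matrices to gensim attributes (model.wv.vectors, model.syn1neg) reflect gensim's usual layout, but this snippet does not use gensim itself:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["thermoelectric", "Bi2Te3", "solar", "battery"]
idx = {w: i for i, w in enumerate(vocab)}

# Toy stand-ins for the two skip-gram weight matrices:
# W_in  ~ gensim's model.wv.vectors (input / "target" embeddings)
# W_out ~ gensim's model.syn1neg    (output / "context" embeddings)
dim = 8
W_in = rng.normal(size=(len(vocab), dim))
W_out = rng.normal(size=(len(vocab), dim))

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Similarity computed purely in the input space, which is what
# model.wv.most_similar effectively does.
sim_in = cosine(W_in[idx["thermoelectric"]], W_in[idx["Bi2Te3"]])

# Similarity mixing input and output embeddings, which instead
# reflects how likely the two words are to co-occur as target/context.
sim_in_out = cosine(W_in[idx["thermoelectric"]], W_out[idx["Bi2Te3"]])

print(sim_in, sim_in_out)  # generally two different values
```

In a trained model the two scores answer different questions: input-input similarity captures "used in similar contexts", while input-output similarity captures "likely to appear near each other".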

@vtshitoyan
Collaborator

Hi @dkajtoch, thanks for the great question. The information is available in the caption of Figure 2b. We use input embeddings for the similarity between the application word and context words, and a combination of input and output embeddings between the context words and materials. This pretty much translates to "which words similar to the application word is this material likely to be mentioned with?". I hope this helps.
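The two-step scheme described above can be sketched with numpy. This is a conceptual illustration under stated assumptions, not the paper's actual code: all names are hypothetical, the embeddings are random stand-ins, and in the real model they would come from the trained word2vec matrices (input: model.wv.vectors, output: model.syn1neg):

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 8
n_context = 5  # number of candidate context words (toy value)

# Hypothetical embeddings standing in for trained word2vec vectors.
app_in = rng.normal(size=dim)                  # input emb. of the application word
context_in = rng.normal(size=(n_context, dim)) # input emb. of candidate context words
context_out = rng.normal(size=(n_context, dim))# output emb. of the same context words
material_in = rng.normal(size=dim)             # input emb. of a material

def unit(x):
    # Normalize vectors along the last axis so dot products are cosines.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Step 1: weight context words by their input-input cosine similarity
# to the application word ("words similar to the application word").
weights = unit(context_in) @ unit(app_in)

# Step 2: score the material against those context words using
# input (material) x output (context) embeddings, i.e. a
# co-occurrence-flavoured score, weighted by step 1.
score = float(weights @ (context_out @ material_in))

print(score)
```

The design intuition is that step 1 selects relevant context words in the semantic (input) space, while step 2 uses the input-output product, which is what skip-gram actually trains to predict co-occurrence, to ask whether the material tends to appear near those words.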

@jdagdelen
Contributor

Closing this issue since the discussion seems to have been resolved, but please feel free to reopen if you want to continue.


3 participants