I have a question about your research approach communicated in Nature. You use the phrases "target word" and "context word" there. Normally, in the skip-gram model the embedding for the "target word" (input layer) is different from the embedding for the "context word" (output layer). In gensim, if you use model.wv.most_similar you are effectively searching for similar words using embeddings from the input layer. You can also access "context word" embeddings via model.syn1neg. Were you using both embeddings for analyzing, e.g., the relation between a chemical compound and "thermoelectric"?
Hi @dkajtoch thanks for the great question. The information is available in the caption of Figure 2b. We use "input embeddings" between the application word and context words, and a combination of input and output embeddings for the context words and materials. This pretty much translates to "which words that are similar to the application word is this material likely to be mentioned with". I hope this helps.