Hi
Firstly, what you've done looks great - thank you. Secondly, I'm interested in a feature that you hint at in your 1st blog post: "2. We make use of recent work on word embeddings to compute embeddings for unknown words on the fly from definitions or information that you can provide (it's very simple in fact: you can compute a word embedding for “Kendall Jenner” simply by averaging the vectors for “woman” and “model”, for example)."
I'd like to know how to supplement the definitions/information you refer to above. I'm working within a domain that has some very specific/unique terms that will not be in the training corpus, and I'd like to know how to use/supplement the code to embed vectors in the way you describe above.
Thanks
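For reference, the averaging approach quoted from the blog post could be sketched roughly like this - a minimal illustration using NumPy with toy vectors (the vectors, dimensions, and the `embed_unknown` helper are all hypothetical; in practice the vectors would come from the model's pretrained vocabulary):

```python
import numpy as np

# Toy pretrained embeddings (hypothetical; real ones would come from
# the model's vocabulary, e.g. GloVe or spaCy vectors).
vectors = {
    "woman": np.array([0.2, 0.8, 0.1]),
    "model": np.array([0.6, 0.4, 0.3]),
}

def embed_unknown(definition_words, vectors):
    """Embed an out-of-vocabulary term by averaging the vectors of
    the known words in its definition."""
    known = [vectors[w] for w in definition_words if w in vectors]
    if not known:
        raise KeyError("no definition word found in the vocabulary")
    return np.mean(known, axis=0)

# "Kendall Jenner" is unknown, but can be defined via "woman" and "model".
vec = embed_unknown(["woman", "model"], vectors)
print(vec)  # element-wise average of the two vectors
```

The resulting vector could then be registered for the new term so downstream lookups treat it like any other vocabulary entry.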
Couldn't easily integrate this feature in the new release, which is tightly integrated with the spaCy pipeline, but I hope we can get it in for the next release. Will keep you posted.