Randomly initialising word vectors #32
There doesn't seem to be an option to initialise word vectors without using pretrained embeddings. There's an option to fill in vectors for tokens missing from the pretrained embeddings with normally distributed values. It would be cool if there was a built-in option to initialise embeddings from a uniform distribution without having to specify a word embedding file.
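A possible workaround today, sketched in plain PyTorch rather than torchtext; `TEXT`, `train`, and `embed_dim` stand in for a Field, dataset, and dimension defined elsewhere, and the ±0.25 range is an arbitrary illustrative choice:

```python
import torch.nn as nn

TEXT.build_vocab(train)  # no vectors argument, so nothing is downloaded
embed = nn.Embedding(len(TEXT.vocab), embed_dim)
# Overwrite the default normally distributed weights with uniform ones.
nn.init.uniform_(embed.weight, -0.25, 0.25)
```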
You'd do that in your model class's __init__.
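For instance, a minimal sketch of that pattern; `TextClassifier` and its arguments are illustrative names, not from the thread:

```python
import torch.nn as nn

class TextClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim, num_classes):
        super().__init__()
        # nn.Embedding fills its weight matrix with values drawn from
        # a standard normal distribution, i.e. random initialisation.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.fc = nn.Linear(embed_dim, num_classes)

    def forward(self, batch):
        # batch: LongTensor of token indices, shape (seq_len, batch_size)
        return self.fc(self.embed(batch).mean(dim=0))
```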
Thanks for the recommendation; I'm a bit new to this. Would the initialised vectors still go into TEXT.vocab.vectors?
No, you don't need to put them there; the only place your embeddings actually need to be is in your model. TEXT.vocab.vectors offers a way to get pretrained vectors corresponding to your vocabulary, which you can then use to initialize your model's embeddings.
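A sketch of that pretrained path, reusing the hypothetical `TextClassifier` above and assuming an existing `train` dataset and 100-dimensional GloVe vectors:

```python
TEXT.build_vocab(train, vectors="glove.6B.100d")

model = TextClassifier(len(TEXT.vocab), embed_dim=100, num_classes=2)
# Copy the pretrained vectors (row-aligned with the vocabulary)
# into the model's embedding layer.
model.embed.weight.data.copy_(TEXT.vocab.vectors)
```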
How do I find that option?
We have a new Vectors class in torchtext.vocab.
Is there an example of this somewhere? I've been looking at the build_vocab docs and can't find any description of its arguments.
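A sketch of how that class plugs into build_vocab; the file name and cache path are placeholders, and unk_init controls how tokens missing from the file are initialised:

```python
import torch
from torchtext import vocab

# Vectors loads a GloVe/word2vec-style text file from disk; unk_init
# decides how tokens missing from that file are initialised
# (torch.Tensor.normal_ fills them with normally distributed values).
custom = vocab.Vectors(name="my_vectors.txt",
                       cache="./.vector_cache",
                       unk_init=torch.Tensor.normal_)
TEXT.build_vocab(train, vectors=custom)

# Named pretrained sets work the same way:
TEXT.build_vocab(train, vectors=vocab.GloVe(name="6B", dim=100))
```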
Can I do the following:

    TEXT.build_vocab(train)
    embed = nn.Embedding(len(TEXT.vocab), embed_dim)
    embed.weight.data.copy_(TEXT.vocab.vectors)

if I am after randomly-initialized embeddings?
You don't need the last line; nn.Embedding should set the vectors randomly on its own.
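Put together with that reply, the random-initialisation version would presumably reduce to:

```python
import torch.nn as nn

TEXT.build_vocab(train)
# nn.Embedding already draws its weights from a standard normal
# distribution, so copying TEXT.vocab.vectors is unnecessary here.
embed = nn.Embedding(len(TEXT.vocab), embed_dim)
```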