There doesn't seem to be an option to initialise word vectors without using pretrained embeddings. There's an option to fill in vectors for tokens missing from the pretrained embeddings with normally distributed values, but it would be cool if there were a built-in option to initialise embeddings from a uniform distribution without having to specify a word embedding file at all.
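In the meantime, a possible workaround is to skip the vectors file entirely and initialise the model's embedding layer yourself. A minimal sketch, assuming `TEXT` is a torchtext `Field` on which `build_vocab` has already been called (the ±0.25 range and the 300 dimensions are illustrative choices, not torchtext defaults):

```python
import torch.nn as nn

# Build the vocab without passing any `vectors` argument:
# TEXT.build_vocab(train_data)

embed_dim = 300  # arbitrary; use whatever dimension your model expects
embedding = nn.Embedding(len(TEXT.vocab), embed_dim)

# Initialise the whole embedding matrix from a uniform distribution.
nn.init.uniform_(embedding.weight, -0.25, 0.25)
```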
No, you don't need to put them there. The only place your embeddings actually need to live is in your model; `TEXT.vocab.vectors` just gives you the pretrained vectors aligned to your vocabulary, which you can then use to initialize your model's embedding layer.
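For reference, the usual pattern looks something like this (a sketch, assuming `TEXT.build_vocab` was called with a `vectors` argument so that `TEXT.vocab.vectors` is populated):

```python
import torch.nn as nn

# TEXT.vocab.vectors is a (vocab_size, embed_dim) tensor aligned to the vocab.
pretrained = TEXT.vocab.vectors

embedding = nn.Embedding(len(TEXT.vocab), pretrained.size(1))
embedding.weight.data.copy_(pretrained)  # copy pretrained vectors into the model
```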