Some words not in model GoogleNews-vectors-negative300.bin #18
Comments
I'm seeing the same thing. How was this resolved?
This is normally expected: it is practically impossible to cover all the words of a given language during training. You have to decide how to handle the unknown words; some common approaches are (sketched in code below):
- skip unknown words entirely;
- map every unknown word to a single shared `<UNK>` vector;
- assign each unknown word a random vector drawn from the same distribution as the known embeddings, and fine-tune it on your own data (more on this below).
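A minimal sketch of those three options (hypothetical helper; assumes a model object like the one returned by this package's `word2vec.load`, where `model.vocab` supports membership tests and `model[word]` returns the embedding):

```python
import numpy as np

def lookup(model, word, strategy="skip", dim=300, rng=np.random.default_rng(0)):
    """Return a vector for `word`, handling out-of-vocabulary words."""
    if word in model.vocab:
        return model[word]
    if strategy == "skip":
        return None                              # caller drops the word
    if strategy == "unk":
        if not hasattr(lookup, "_unk"):          # one shared <UNK> vector, cached
            lookup._unk = rng.standard_normal(dim) * 0.1
        return lookup._unk
    if strategy == "random":
        return rng.standard_normal(dim) * 0.1    # fresh vector per unknown word
    raise ValueError(f"unknown strategy: {strategy}")
```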
Thanks, I had assumed that the ~1000 most common English words ("dog" is ranked 754 here) would inevitably be included in a 3,000,000-word vocabulary, but I don't know enough about how the vocabulary is selected from the input corpus. (Sorry that this is off topic for this repo.)
You're welcome.
These Google weights were trained on 100 billion (!) words and have a 3 million word vocab, so it's still surprising to me that a word like "dog" did not make the cut.
Well, now that you mention it again, it is indeed surprising that "dog" is not included in a 3 million word vocabulary, especially when the word "cat" is included...
Ok, if you check the source code you can see that the maximum "vocabulary hash" size is 3 million (https://github.com/danielfrg/word2vec/blob/master/word2vec/c/word2vec.c#L27), but it seems that the vocabulary does not cover the whole hash table space: there is a function called ReduceVocab that trims the vocabulary down to only the most frequent words (https://github.com/danielfrg/word2vec/blob/master/word2vec/c/word2vec.c#L175). You should check the documentation of the already-trained model, because I think the vocabulary size is one of the parameters chosen at training time.
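In other words, the C code prunes the least frequent words whenever the hash table fills up. A simplified Python rendering of that logic (not a verbatim translation of the C):

```python
def reduce_vocab(counts, min_reduce):
    """Mirror of ReduceVocab in word2vec.c: words whose count is
    <= min_reduce are dropped, and the threshold is raised by one,
    so only the most frequent words survive repeated reductions."""
    kept = {word: count for word, count in counts.items() if count > min_reduce}
    return kept, min_reduce + 1

counts = {"the": 1000, "dog": 3, "zyzzyva": 1}
counts, min_reduce = reduce_vocab(counts, min_reduce=1)
# "zyzzyva" (count 1) is gone; each later call prunes more aggressively
```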
Hello @sicotronic |
I don't know if we have the same problem, but I also noticed that common words were missing. Looking at
Edit: |
Hi @DucVuMinh, I'm sorry for the lack of rigor. The idea behind using a randomly initialized vector for unknown words, with values drawn from the same distribution as the known words, is that you get a point in the vector space that looks like a real observed word. You can then apply all the distance calculations consistently with the known words, and also retrain the embeddings to fit your data, including the vectors for your unknown words. I co-authored a paper at IJCAI 2017 (https://www.ijcai.org/proceedings/2017/573) where we used a similar idea when assigning vectors to words we wanted to replace: basically, we wanted to turn question sentences into something that looks like statements, so the vectors representing the wh-question words (who/when/where) were replaced by the vectors of the words most likely to make the sentence "similar" (under a given metric) to most of the answer sentences for each question type. Anyway, I think it is a fairly common trick to initialize the vectors for unknown words with random values (under the same distribution as the already known words) and then fit them to your training dataset so they represent the averaged distribution of the unknown words in your data. I'm not sure exactly which papers present this idea, but it is a technical tip shared almost everywhere I can remember; if you google it you will find several results (answers on stackexchange.com, blogs, other repositories). I just did that and found this comment by dennybritz here:
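A sketch of that trick, assuming the embedding matrix is exposed as a NumPy array (as with `model.vectors` in this package):

```python
import numpy as np

def unknown_vector(vectors, rng=np.random.default_rng()):
    """Sample a random vector whose per-dimension mean and standard
    deviation match the known embeddings, so it lands in a plausible
    region of the space and can later be fine-tuned on your own data."""
    return rng.normal(vectors.mean(axis=0), vectors.std(axis=0))

# e.g., with a model loaded via word2vec.load:
# oov = unknown_vector(model.vectors)
```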
@sicotronic |
I'm seeing the same thing. What's the status on this now?
I am seeing the same issue as @liu-zg15. While it reports words like 'a', 'to', and 'and' are not in the vocabulary, it has vectors for 'b', 'c', etc. This seems like it must be some sort of bug instead of a lack of vocab coverage... (however, it found vectors for both 'dog' and 'cat', unlike the earlier commenter).
I am also facing the same issue: I could not find words like 'a', 'to' and 'of', but it appears the corresponding words starting with uppercase 'A', 'To' and 'Of' are available.
So-called 'stop words' like articles, particles, and prepositions are eliminated in most w2v models, as they take a lot of memory (usually half of all words) while having no independent meaning, and are thus useless in this sense.
I am able to get vectors for m['DOG'] and m['CAT'] when used in uppercase. It's weird that my model only accepts uppercase words. I am using the pretrained GoogleNews-negative300.
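If that's the behavior, a case-fallback lookup works around it (sketch; assumes the model object returned by `word2vec.load`):

```python
def get_vector(model, word):
    """Try the word as-is, then common case variants, since this model
    reportedly keeps e.g. 'DOG' and 'Of' while lacking some lowercase forms."""
    for candidate in (word, word.capitalize(), word.upper(), word.lower()):
        if candidate in model.vocab:
            return model[candidate]
    raise KeyError(f"no case variant of {word!r} in vocabulary")
```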
When I use word2vec to access the pre-trained model GoogleNews-vectors-negative300.bin, some of the words are reported as not being in the model. I've had the same problem on a 16GB Mac running OS 10.10.2 and on a large Linux machine. Here is a session on Linux:
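A minimal session that reproduces the report might look like this (sketch; file path assumed, output omitted):

```python
import word2vec

# Load the pre-trained Google News release (path assumed).
model = word2vec.load('GoogleNews-vectors-negative300.bin')

print('dog' in model.vocab)  # reported False for some common words
print(model['cat'][:5])      # while other words resolve normally
```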