"Vocab size" is way off. See the attached screen shot: 1.4 billion words of PubMedCentral author manuscripts, and the vocabulary size is 1,307, according to the status message output. Doesn't seem likely.
I'm using whatever version of wordVectors was on GitHub as of mid-January 2015. Not sure what version of RStudio--I think those 1.4 billion words of text have choked my laptop to death... OS X.
An additional data point regarding the small vocabulary size showing up in the status message: I'm trying to run train_word2vec() on a data set that's about a tenth the size of the one I was using yesterday, and it reports the same vocabulary size. See the attached screenshot.
Huh. How are you normalizing the text? I can untar a few PubMed abstracts and run `cat */*.txt | perl -pe 's/[^A-Za-z \n]/ /g;' > all.txt` to get something that gives a vocabulary of 40,195 out of 5.8 million words.
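Roughly, the full pipeline I have in mind looks like the sketch below. The file names are placeholders, and the training arguments are just one plausible set; check the defaults in whatever version of the package you have installed (min_count in particular changes the reported vocab size).

```r
library(wordVectors)

# Keep only ASCII letters, spaces, and newlines; same perl one-liner as above,
# run from R via system() for convenience
system("cat */*.txt | perl -pe 's/[^A-Za-z \\n]/ /g;' > all.txt")

# Sketch only: "all.txt" and "pubmed_vectors.bin" are placeholder names
model = train_word2vec("all.txt", output_file = "pubmed_vectors.bin",
                       vectors = 100, threads = 4, window = 12)
```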
What's the output of `system("head -20 YOURFILENAME | cut -c 1-80")`? Does it look like real text? Another check: what are the first twenty rownames() of the trained model object?
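Something along these lines, assuming the normalized file is called all.txt and the trained object is model:

```r
# Does the input look like real text, not markup or binary junk?
system("head -20 all.txt | cut -c 1-80")

# Are the vocabulary entries sensible English/biomedical words?
rownames(model)[1:20]
```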
Try rerunning `install_github("bmschmidt/wordVectors")`; maybe the update last week fixed it.
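That is, assuming devtools is installed:

```r
library(devtools)
install_github("bmschmidt/wordVectors")
# then restart R and re-run train_word2vec() on the same input
```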
"Vocab size" is way off. See the attached screen shot: 1.4 billion words of PubMedCentral author manuscripts, and the vocabulary size is 1,307, according to the status message output. Doesn't seem likely.
I'm using whatever version of wordVectors was on GitHub as of mid-January 2015. Not sure what version of RStudio--I think those 1.4 billion words of text have choked my laptop to death... OS X.
The text was updated successfully, but these errors were encountered: