GloVe model for distributed word representation

GloVe: Global Vectors for Word Representation

[Figures: nearest neighbors — Litoria, Leptodactylidae, Rana, Eleutherodactylus; comparisons — man → woman, city → zip, comparative → superlative; GloVe geometry]

We provide an implementation of the GloVe model for learning word representations, and describe how to download pre-trained web-dataset vectors or train your own. See the project page or the paper for more information on GloVe vectors.

Download pre-trained word vectors

The links below contain word vectors obtained from the respective corpora. If you want word vectors trained on massive web datasets, you need only download one of these text files! Pre-trained word vectors are made available under the Public Domain Dedication and License.

  • Common Crawl (42B tokens, 1.9M vocab, uncased, 300d vectors, 1.75 GB download):
  • Common Crawl (840B tokens, 2.2M vocab, cased, 300d vectors, 2.03 GB download):
  • Wikipedia 2014 + Gigaword 5 (6B tokens, 400K vocab, uncased, 300d vectors, 822 MB download):
  • Twitter (2B tweets, 27B tokens, 1.2M vocab, uncased, 200d vectors, 1.42 GB download):
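Each archive unzips to a plain text file with one token per line, followed by its space-separated vector components. A minimal loader sketch in Python (the filename in the comment is the 300d file from the 6B archive; the `limit` parameter is a convenience added here, not part of any official API):

```python
import numpy as np

def load_glove(path, limit=None):
    """Parse a GloVe text file: one token per line, followed by the
    token's space-separated float components."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for i, line in enumerate(f):
            if limit is not None and i >= limit:
                break
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

# Example, assuming glove.6B.300d.txt has been downloaded and unzipped:
# vectors = load_glove("glove.6B.300d.txt")
# vectors["frog"].shape  # (300,)
```

Loading the full file can take a minute and several gigabytes of RAM; `limit` lets you read just the most frequent tokens, since the files are sorted by corpus frequency.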

Train word vectors on a new corpus

If the web datasets above don't match the semantics of your end use case, you can train word vectors on your own corpus.

$ git clone
$ cd glove && make
$ ./

The script downloads a small corpus, consisting of the first 100M characters of Wikipedia. It collects unigram counts, constructs and shuffles cooccurrence data, and trains a simple version of the GloVe model. It also runs a word analogy evaluation script in Python to verify word vector quality. More details about training on your own corpus can be found by reading or the src/
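The analogy evaluation follows the standard vector-arithmetic recipe (this is a sketch of the idea, not a reproduction of the repo's evaluation script): for a query "a is to b as c is to ?", compute vec(b) − vec(a) + vec(c) and return the nearest vocabulary word by cosine similarity, excluding the three query words. A toy illustration with hand-made 2-d vectors:

```python
import numpy as np

def analogy(vectors, a, b, c):
    """Answer 'a is to b as c is to ?' by finding the nearest cosine
    neighbor of vec(b) - vec(a) + vec(c), excluding the query words."""
    target = vectors[b] - vectors[a] + vectors[c]
    best_word, best_sim = None, -np.inf
    for word, vec in vectors.items():
        if word in (a, b, c):
            continue
        sim = vec @ target / (np.linalg.norm(vec) * np.linalg.norm(target))
        if sim > best_sim:
            best_word, best_sim = word, sim
    return best_word

# Hand-made toy vectors (real GloVe vectors are 50-300 dimensional):
toy = {
    "man":   np.array([1.0, 0.0]),
    "woman": np.array([0.0, 1.0]),
    "king":  np.array([1.0, 1.0]),
    "queen": np.array([0.0, 2.0]),
}
print(analogy(toy, "man", "woman", "king"))  # prints "queen" for these toy vectors
```

The linear search over the vocabulary is O(|V|) per query; real evaluation code typically batches this as a single matrix-vector product over normalized embeddings.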


All work contained in this package is licensed under the Apache License, Version 2.0. See the included LICENSE file.