
GloVe: Global Vectors for Word Representation

This repository is a fork of Stanford's GloVe repository, with changes to make it simpler to train GloVe embeddings on Wikipedia data.

Getting the embeddings

First, make sure gensim, spacy, and tqdm are installed in your Python environment:

$ conda env create -f environment.yml

Then, download the spacy multilingual model:

$ python -m spacy download xx_ent_wiki_sm
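As a quick sanity check, the downloaded model can be loaded and used directly in Python (a minimal sketch, assuming spaCy 3; the snippet is illustrative and not part of the repo):

import spacy

# Load the multilingual model downloaded above; this is the tokenizer
# that src/tokenizer.py relies on.
nlp = spacy.load("xx_ent_wiki_sm")
doc = nlp("O GloVe aprende vetores a partir de coocorrências.")
print([token.text for token in doc])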

The files MakeWiki and src/tokenizer.py were added to simplify the embedding training process. To create GloVe embeddings for a language, use the command:

$ make -f MakeWiki LANGUAGE=<language>

In this command, <language> can be any language on Wikipedia for which spacy's tokenizer works. For example, to train Portuguese embeddings, run:

$ make -f MakeWiki LANGUAGE=pt

This command will download the latest Wikipedia dump for that language, tokenize it, and then train GloVe embeddings on it. Vectors will be saved to results/<language>/vectors.bin.
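Once training finishes, the embeddings can be queried with gensim (a minimal sketch; it assumes a text-format file vectors.txt was written alongside vectors.bin, which the stock GloVe trainer can emit, and gensim 4 or later):

from gensim.models import KeyedVectors

# GloVe text files lack the word2vec header line, hence no_header=True.
vectors = KeyedVectors.load_word2vec_format(
    "results/pt/vectors.txt", binary=False, no_header=True)
print(vectors.most_similar("cidade", topn=5))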

Original README:

[Figures from the project page: nearest neighbors of "frog" (Litoria, Leptodactylidae, Rana, Eleutherodactylus) with pictures; linear substructure comparisons man -> woman, city -> zip, comparative -> superlative; GloVe geometry.]

We provide an implementation of the GloVe model for learning word representations, and describe how to download web-dataset vectors or train your own. See the project page or the paper for more information on GloVe vectors.
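For reference, the model minimizes a weighted least-squares objective over the word co-occurrence matrix X, as described in the paper. A sketch of that loss in numpy (variable names are illustrative; the default x_max = 100 and alpha = 0.75 follow the paper):

import numpy as np

def glove_loss(W, W_ctx, b, b_ctx, X, x_max=100.0, alpha=0.75):
    """Weighted least-squares GloVe objective."""
    # f(x) = (x / x_max)^alpha for x < x_max, else 1; it is zero when x = 0,
    # so word pairs that never co-occur contribute nothing.
    f = np.where(X < x_max, (X / x_max) ** alpha, 1.0)
    # The model fits w_i . w~_j + b_i + b~_j to log X_ij.
    pred = W @ W_ctx.T + b[:, None] + b_ctx[None, :]
    log_X = np.log(np.where(X > 0, X, 1.0))  # placeholder log(1)=0 where X=0
    return np.sum(f * (pred - log_X) ** 2)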

Download pre-trained word vectors

The links below contain word vectors obtained from the respective corpora. If you want word vectors trained on massive web datasets, you need only download one of these text files! Pre-trained word vectors are made available under the Public Domain Dedication and License. A short example of loading them with gensim follows the list.

  • Common Crawl (42B tokens, 1.9M vocab, uncased, 300d vectors, 1.75 GB download): glove.42B.300d.zip
  • Common Crawl (840B tokens, 2.2M vocab, cased, 300d vectors, 2.03 GB download): glove.840B.300d.zip
  • Wikipedia 2014 + Gigaword 5 (6B tokens, 400K vocab, uncased, 300d vectors, 822 MB download): glove.6B.zip
  • Twitter (2B tweets, 27B tokens, 1.2M vocab, uncased, 200d vectors, 1.42 GB download): glove.twitter.27B.zip
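A minimal loading sketch (assuming glove.6B.zip has been unzipped and gensim 4 or later; the limit parameter is optional and just caps how many of the 400K vectors are read, to keep memory modest):

from gensim.models import KeyedVectors

# GloVe text files have no word2vec header line, hence no_header=True.
glove = KeyedVectors.load_word2vec_format(
    "glove.6B.300d.txt", binary=False, no_header=True, limit=100000)
print(glove.most_similar("frog", topn=4))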

Train word vectors on a new corpus

If the web datasets above don't match the semantics of your end use case, you can train word vectors on your own corpus.

$ git clone http://github.com/stanfordnlp/glove
$ cd glove && make
$ ./demo.sh

The demo.sh script downloads a small corpus, consisting of the first 100M characters of Wikipedia. It collects unigram counts, constructs and shuffles cooccurrence data, and trains a simple version of the GloVe model. It also runs a word analogy evaluation script in Python to verify word vector quality. More details about training on your own corpus can be found in demo.sh or src/README.md.
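The analogy evaluation tests vector arithmetic of the form king - man + woman ≈ queen. A minimal sketch of the same check with gensim, using one of the pre-trained files above (not the repo's evaluation script itself):

from gensim.models import KeyedVectors

glove = KeyedVectors.load_word2vec_format(
    "glove.6B.300d.txt", binary=False, no_header=True)
# The offset man -> woman should carry over to king -> queen.
print(glove.most_similar(positive=["king", "woman"], negative=["man"], topn=1))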

License

All work contained in this package is licensed under the Apache License, Version 2.0. See the included LICENSE file.
