This repository contains code and transforms for inducing your own rare-word/n-gram vectors, as well as evaluation code for the A La Carte Embedding paper. An overview is provided in this blog post at OffConvex.

If you find any of this code useful, please cite the following:

@inproceedings{khodak2018alacarte,
  title={A La Carte Embedding: Cheap but Effective Induction of Semantic Feature Vectors},
  author={Khodak, Mikhail and Saunshi, Nikunj and Liang, Yingyu and Ma, Tengyu and Stewart, Brandon and Arora, Sanjeev},
  booktitle={Proceedings of the ACL},
  year={2018}
}

Inducing your own à la carte vectors

The following steps induce vectors for rare words or n-grams in the same semantic space as existing GloVe embeddings. Pre-induced vectors for rare words from the IMDB, PTB-WSJ, SST, and STS tasks, computed using Common Crawl and Gigaword+Wikipedia, are also available.

  1. Make a text file containing one word or space-delimited n-gram per line. These are the targets for which vectors will be induced.
  2. Download source embedding files, which should have the format "word float ... float" on each line. You can find GloVe embeddings here. Choose the appropriate transform in the transform directory.
  3. If using Common Crawl, download a file of WET paths (e.g. here for the 2014 crawl) and pass it via the --paths argument. Otherwise pass one or more text files via the --corpus argument.
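Conceptually, the induction these steps set up applies a learned linear transform to the average embedding of each target's context words. The sketch below illustrates that idea only; it is not the repository's implementation, and the function name and in-memory inputs are hypothetical:

```python
import numpy as np

def induce_vector(target, corpus_tokens, embeddings, A, window=5):
    """Induce an a la carte vector for `target` from a tokenized corpus.

    embeddings: dict mapping word -> source embedding (np.ndarray)
    A: learned d x d transform matrix (e.g. loaded from the transform directory)
    """
    d = A.shape[1]
    context_sum = np.zeros(d)
    count = 0
    for i, token in enumerate(corpus_tokens):
        if token != target:
            continue
        # Sum source embeddings of words in a symmetric window around the target.
        lo, hi = max(0, i - window), min(len(corpus_tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i and corpus_tokens[j] in embeddings:
                context_sum += embeddings[corpus_tokens[j]]
                count += 1
    if count == 0:
        return None  # target never appears in the corpus
    # Apply the transform to the average context embedding.
    return A @ (context_sum / count)
```

With the identity transform this reduces to plain context averaging; the learned transform is what places the induced vector in the source embedding space.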


Required: numpy

Optional: h5py (checkpointing), nltk (n-grams), cld2-cffi (English-language detection), mpi4py (parallelization via MPI), boto (Common Crawl access)

For inducing vectors from Common Crawl on an AWS EC2 instance:

  1. Start an instance. A memory-optimized (r4.*) Linux instance is recommended.
  2. Download and execute
  3. Upload your list of target words to the instance and run

Evaluation code

Note that the code in this directory treats summing the embeddings of all context words in a corpus as a matrix operation. This is memory-intensive; more practical implementations should compute context vectors by simple vector addition.
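A minimal sketch of that lighter-weight vector-addition approach (a hypothetical helper, not this directory's code, assuming embeddings are held in a dict keyed by word):

```python
import numpy as np

def context_vectors(docs, embeddings, dim, window=5):
    """Accumulate each token's context vector by streaming vector addition,
    avoiding an explicit cooccurrence matrix.

    docs: iterable of tokenized documents (lists of words)
    embeddings: dict mapping word -> np.ndarray of length `dim`
    """
    vecs = {}
    for doc in docs:
        for i, word in enumerate(doc):
            lo, hi = max(0, i - window), min(len(doc), i + window + 1)
            for j in range(lo, hi):
                if j == i:
                    continue
                emb = embeddings.get(doc[j])
                if emb is not None:
                    if word not in vecs:
                        vecs[word] = np.zeros(dim)
                    vecs[word] += emb  # add context embedding in place
    return vecs
```

Memory usage is then proportional to the vocabulary times the embedding dimension, rather than to the square of the vocabulary as with a dense cooccurrence matrix.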

Dependencies: nltk, numpy, scipy, text_embedding

Optional: mpi4py (to parallelize cooccurrence matrix construction)