This code runs the log-bilinear document model (lblDm) described in:
Andrew L. Maas and Andrew Y. Ng. (2010).
A Probabilistic Model for Semantic Word Vectors.
NIPS 2010 Workshop on Deep Learning and Unsupervised Feature Learning.

How to run the code::
Open MATLAB. If you can, open a matlabpool, as it will help the code
run faster. In MATLAB on my multi-core system I do this with:
matlabpool open local;

Run the learning procedure for the model:
run_lblDm;

Run the visualization to see how the learned representations cluster:
run_tsne;

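Put together, a complete session looks like the sketch below. Note
that matlabpool applies to older MATLAB releases with the Parallel
Computing Toolbox; newer releases use parpool instead.

    % Full demo session: open a pool, train the model, visualize.
    matlabpool open local;   % optional; speeds up the parfor loop
    run_lblDm;               % learn document coefficients and word vectors
    run_tsne;                % cluster and plot the learned representations
    matlabpool close;        % release the workers when done
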
Details::
The demo uses data from Flickr tags. Each document is the set of tags
associated with an image on Flickr. The dataset is a derivative of
the NUS-WIDE dataset:
http://lms.comp.nus.edu.sg/research/NUS-WIDE.htm

The demo is currently set to learn word vectors for 1k words using
100k documents. The top 1k words are used after ignoring the 50 most
frequently occurring words (common stop word removal). This is a
fairly small demo, but it runs quickly.

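As a rough illustration of that vocabulary selection (a hypothetical
sketch, not the repository's actual preprocessing code; the word
counts are assumed to be precomputed):

    % Hypothetical sketch: keep the 1k most frequent words after
    % dropping the 50 most frequent (stop word removal).
    [~, order] = sort(wordCounts, 'descend'); % wordCounts(i) = count of word i
    vocabIdx = order(51:1050);                % skip top 50, keep next 1000
    vocab = allWords(vocabIdx);               % allWords: cell array of words
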
The code uses alternating optimization, where both subproblems are
optimized with the minFunc package. First, the MAP estimates for the
document coefficients theta are updated. This requires solving one
small optimization problem per document. The code is set to use a
parfor loop, so if you have a matlabpool open it will likely run
much faster on a multi-core machine. In the second phase, the word
representations are updated, which is a single large optimization
problem. This alternating procedure continues until the maximum
number of algorithm iterations is reached. For this demo, the model
converges much sooner than the maximum number of outer iterations.
You can see this happening when the word representation optimization
stops within a few iterations.

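A minimal sketch of this alternating loop is below. The variable and
function names (theta, R, docObjective, wordObjective) are
hypothetical, not the actual identifiers in run_lblDm; the only
assumption about minFunc is its standard interface, where the
objective function returns a value and a gradient.

    % Sketch of the alternating updates (hypothetical names).
    options = struct('Display', 'off', 'MaxIter', 50);
    for outer = 1:maxOuterIters
        % Phase 1: MAP update of document coefficients theta, one
        % small problem per document; parfor uses the open matlabpool.
        parfor d = 1:numDocs
            theta(:, d) = minFunc(@(t) docObjective(t, R, docs{d}), ...
                                  theta(:, d), options);
        end
        % Phase 2: update all word representations R in one large problem.
        R(:) = minFunc(@(r) wordObjective(r, theta, docs), R(:), options);
    end
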
The visualization step clusters the learned word representations using
the t-SNE algorithm. A 2-D plot is then displayed and gives some sense
of how the word representations define similarity among words.

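Roughly, the visualization amounts to something like the following
(a sketch, not the contents of run_tsne; it assumes wordVectors holds
one representation per row and uses the tsne function from the t-SNE
page below, whose exact signature may vary by version):

    % Hypothetical sketch: embed learned word vectors in 2-D and plot.
    mapped = tsne(wordVectors, [], 2);        % one row per word
    scatter(mapped(:, 1), mapped(:, 2), 10, 'filled');
    text(mapped(:, 1), mapped(:, 2), vocab);  % label points with words
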
References for supporting code::
minFunc:
http://www.cs.ubc.ca/~schmidtm/Software/minFunc.html

t-SNE:
http://homepage.tudelft.nl/19j49/t-SNE.html
L.J.P. van der Maaten and G.E. Hinton.
Visualizing High-Dimensional Data Using t-SNE.
Journal of Machine Learning Research 9(Nov):2579-2605, 2008.
