Neural network definition models

Dictionary Definition Models

A recurrent neural network that learns to define words from dictionaries. This repository accompanies the AAAI 2017 paper "Definition Modeling: Learning to Define Word Embeddings in Natural Language" and includes the preprocessing scripts.

Dependencies

CUDA Libraries

Skip this if you do not have a GPU.

Torch Libraries

Most of the libraries come with Torch if you install it from the official installation script. You can use luarocks to install additional packages, for example luarocks install dp. The additional packages are:

If you are planning to use a GPU (CUDA), you will need the following packages:

  • cutorch
  • cunn
  • cudnn (make sure that you get the right branch for your cuDNN version)

To install from source, go to the source code directory and run luarocks install.
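For example, the packages above can be installed with luarocks (assuming Torch and luarocks are already on your PATH; the exact cudnn branch depends on your cuDNN version):

```shell
luarocks install dp        # additional package used by the scripts
luarocks install cutorch   # GPU only
luarocks install cunn      # GPU only
# cudnn: clone the cudnn.torch branch matching your cuDNN version,
# then run `luarocks install` inside the clone
```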

Python Libraries

  • numpy
  • KenLM (installation: pip install https://github.com/kpu/kenlm/archive/master.zip)

Word Embedding

You will also need a set of word embeddings stored as a torch binary file containing an object of the form:

{
  M, -- 2D tensor where each row is an embedding
  v2wvocab, -- index-to-word map
  w2vvocab -- word-to-index map
}

You can download embeddings from Word2Vec and use word2vec.torch to convert them into a torch binary file.
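As a sketch, an object in the expected format can be created and saved from the Torch REPL as follows (toy values only; `embeddings.t7` is a placeholder filename):

```lua
-- Toy example of the expected embedding object (not real embeddings).
local emb = {
  M = torch.randn(3, 300),                -- one 300-d row per word
  v2wvocab = {'the', 'cat', 'sat'},       -- index-to-word map
  w2vvocab = {the = 1, cat = 2, sat = 3}, -- word-to-index map
}
torch.save('embeddings.t7', emb)
```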

Usage

Most of the scripts provide a help message, which can be accessed with

th script.lua --help

Preparing data

  • First, convert the text data into torch binary files using preprocess/prep_definition.lua. This will create multiple torch binary files in the data directory.
  • Then sub-select word embeddings using preprocess/prep_w2v.lua. This aligns the vocabulary and saves only the set of embeddings we need.
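The two steps above would be run along these lines (flags omitted here; check each script's --help for the actual options):

```shell
# Step 1: convert text data into torch binary files in the data directory
th preprocess/prep_definition.lua --help
# Step 2: align the vocabulary and sub-select the word embeddings
th preprocess/prep_w2v.lua --help
```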

We include our dataset (data/commondefs). If you want to use another dataset, please check the file format. For dictionary parsing scripts, check out dict-definition (which currently supports only WordNet and GCIDE).

Main scripts

  • train.lua is a script for training a model.
  • test.lua is a script that uses a model to compute perplexity, generate definitions, and rank words (reverse dictionary).

Please see the options in the help message of each script.

To-do

  • Add detailed usage instructions and examples to the README
  • Refactor ranking scripts