harvardnlp/MemN2N

End-To-End Memory Network

Torch implementation of MemN2N (Sukhbaatar et al., 2015). Supports Adjacent Weight Tying, Position Encoding, Temporal Encoding, and Linear Start. The code uses v1.0 of the bAbI dataset with 1k questions per task.
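For reference, the Position Encoding scheme mentioned above can be sketched in NumPy (this follows eq. (4)-(5) of the paper, not the repo's Torch code; the function name `position_encoding` is a hypothetical helper):

```python
import numpy as np

def position_encoding(J, d):
    """Position-encoding matrix l of shape (J, d), from Sukhbaatar et al. (2015):
    l[k][j] = (1 - j/J) - (k/d)(1 - 2j/J), with 1-based word position j and
    embedding dimension k. Each memory vector is then sum_j l_j * (A x_ij),
    so word order inside a sentence is no longer ignored."""
    l = np.zeros((J, d))
    for j in range(1, J + 1):       # word position within the sentence
        for k in range(1, d + 1):   # embedding dimension
            l[j - 1, k - 1] = (1 - j / J) - (k / d) * (1 - 2 * j / J)
    return l
```

With Position Encoding disabled, l is all ones and the sentence representation reduces to a plain bag of words.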

Prerequisites:

  • Python 2.7
  • Torch (with nngraph)

Preprocessing

First, preprocess the included data into HDF5 format:

python preprocess.py

This will create one HDF5 file per task (20 tasks in total).

To train:

th train.lua

By default, this trains on task 16 with Linear Start for 100 epochs. See train.lua (or the paper) for hyperparameters and additional training options. To train on a GPU, pass the -cuda flag.
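To make the model being trained concrete: a single memory hop of MemN2N can be sketched in NumPy as below (a simplified sketch of the paper's equations with pre-embedded inputs, assuming `m` holds the input memory embeddings and `c` the output embeddings; this is not the repo's Torch implementation):

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_hop(u, m, c):
    """One MemN2N hop (Sukhbaatar et al., 2015):
      p_i = softmax(u . m_i)   -- attention over the n memory slots
      o   = sum_i p_i * c_i    -- weighted sum of output embeddings
      u'  = u + o              -- updated controller state for the next hop
    u: query embedding, shape (d,); m, c: memory matrices, shape (n, d)."""
    p = softmax(m @ u)   # attention weights, shape (n,)
    o = p @ c            # read vector, shape (d,)
    return u + o
```

With Linear Start, the softmax above is initially removed (attention stays linear) and is reinserted once validation loss plateaus, which the paper reports helps avoid poor local minima.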

About

Torch implementation of End-to-End Memory Networks (https://arxiv.org/abs/1503.08895)
