Python code for training Paragram word embeddings, which achieve human-level performance on some word similarity tasks, including SimLex-999. This code was used to obtain the results in the appendix of our 2015 TACL paper "From Paraphrase Database to Compositional Paraphrase Model and Back".

jwieting/paragram-word


paragram-word

Code to train Paragram word embeddings from the appendix of "From Paraphrase Database to Compositional Paraphrase Model and Back".

The code is written in Python and requires NumPy, SciPy, Theano, and the Lasagne library.

To get started, run setup.sh, which downloads the required files. Then run demo.sh to start training a model. See main/train.py for the command-line options.
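Once training finishes, the embeddings can be used for word-similarity scoring, the task on which they are evaluated (e.g. SimLex-999). A minimal sketch, assuming the vectors are stored in the common text format of one word per line followed by its vector components (this format and the helper names are assumptions, not taken from the repository):

```python
import numpy as np

def load_embeddings(path):
    """Load word vectors from a whitespace-separated text file.

    Assumes the common layout "word v1 v2 ... vd" per line; adjust
    if the trained model is saved in a different format.
    """
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split()
            if len(parts) < 2:
                continue  # skip blank or malformed lines
            word, values = parts[0], parts[1:]
            vectors[word] = np.asarray(values, dtype=np.float64)
    return vectors

def cosine_similarity(u, v):
    """Cosine similarity between two vectors, the standard scoring
    function for word-similarity benchmarks such as SimLex-999."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
```

For example, `cosine_similarity(vecs["old"], vecs["new"])` would score a SimLex-999 word pair; ranking all pairs by this score and correlating with the human ratings gives the benchmark number.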

If you use our code in your work, please cite:

@article{wieting2015ppdb,
  title={From Paraphrase Database to Compositional Paraphrase Model and Back},
  author={John Wieting and Mohit Bansal and Kevin Gimpel and Karen Livescu and Dan Roth},
  journal={Transactions of the ACL (TACL)},
  year={2015}
}
