SWEM (Simple Word-Embedding-based Models)

This repository contains the TensorFlow source code necessary to reproduce the results presented in the following paper: "Baseline Needs More Love: On Simple Word-Embedding-Based Models and Associated Pooling Mechanisms" (ACL 2018).

This project is maintained by Dinghan Shen. Feel free to contact dinghan.shen@duke.edu for any relevant issues.
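For intuition about what the model computes, below is a minimal NumPy sketch of the pooling operations studied in the paper (average, max, their concatenation, and hierarchical pooling). The function names, window size, and shapes are illustrative only; the repository's actual implementation is the TensorFlow code in model.py.

# Minimal NumPy sketch of the SWEM pooling operations described in the paper.
# Names and shapes are illustrative, not the repository's API.
import numpy as np

def swem_aver(emb):
    # Average pooling over words: (seq_len, emb_size) -> (emb_size,)
    return emb.mean(axis=0)

def swem_max(emb):
    # Max pooling over words.
    return emb.max(axis=0)

def swem_concat(emb):
    # Concatenation of average- and max-pooled features.
    return np.concatenate([swem_aver(emb), swem_max(emb)])

def swem_hier(emb, window=5):
    # Hierarchical pooling: average over local windows, then max over windows.
    seq_len, _ = emb.shape
    windows = [emb[i:i + window].mean(axis=0)
               for i in range(max(seq_len - window + 1, 1))]
    return np.max(np.stack(windows), axis=0)

# Example: a sentence of 10 words with 300-dimensional embeddings.
sentence_emb = np.random.randn(10, 300)
print(swem_concat(sentence_emb).shape)  # (600,)
print(swem_hier(sentence_emb).shape)    # (300,)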

Prerequisites:

  • CUDA, cuDNN
  • Python 2.7
  • TensorFlow (version > 1.0); we used TensorFlow 1.5.
  • Run: pip install -r requirements.txt to install the remaining requirements.

Data:

  • For convenience, we provide pre-processed versions of the following datasets: DBpedia, SNLI, and Yahoo. The data are stored in pickle format, and each .p file contains the same fields in the same order (a loading sketch follows this list):

    • train_text, val_text, test_text, train_label, val_label, test_label, dictionary (wordtoix), reverse dictionary (ixtoword)
  • These .p files can be downloaded from the links below; after downloading, put them into the data folder.
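Below is a minimal loading sketch that unpacks one of these .p files in the field order listed above; the file name dbpedia.p and the ./data/ path are assumptions for illustration.

# Minimal loading sketch, assuming the field order listed above.
# The file name "./data/dbpedia.p" is illustrative.
import cPickle  # Python 2.7, per the prerequisites above

with open('./data/dbpedia.p', 'rb') as f:
    x = cPickle.load(f)

train_text, val_text, test_text = x[0], x[1], x[2]
train_label, val_label, test_label = x[3], x[4], x[5]
wordtoix, ixtoword = x[6], x[7]  # dictionary and reverse dictionary

print('train size: %d, vocabulary size: %d' % (len(train_text), len(wordtoix)))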

Run

  • Run: python eval_dbpedia_emb.py for ontology classification on the DBpedia dataset

  • Run: python eval_snli_emb.py for natural language inference on the SNLI dataset

  • Run: python eval_yahoo_emb.py for topic categorization on the Yahoo! Answers dataset

  • Options: hyperparameters can be set by editing the option class in any of the above three files (see the sketch after this list):

  • opt.emb_size: dimension of the word embeddings.
  • opt.drop_rate: keep probability of the dropout layer.
  • opt.lr: learning rate.
  • opt.batch_size: batch size.
  • opt.H_dis: dimension of the last hidden layer.
  • On a K80 GPU, training takes roughly 3 minutes per epoch and about 5 epochs to converge on DBpedia, 50 seconds per epoch and about 20 epochs on SNLI, and 4 minutes per epoch and about 5 epochs on the Yahoo dataset.
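As an illustration, the option attributes above might be set as follows; the class name Options and the values shown are illustrative examples, not the repository's defaults.

# Illustrative sketch of the option attributes listed above, assuming an
# option class as in the eval_*.py scripts; values are examples only.
class Options(object):
    def __init__(self):
        self.emb_size = 300     # word embedding dimension
        self.drop_rate = 0.8    # keep probability of the dropout layer
        self.lr = 1e-3          # learning rate
        self.batch_size = 100   # batch size
        self.H_dis = 300        # dimension of the last hidden layer

opt = Options()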

Subspace Training & Intrinsic Dimension

To measure the intrinsic dimension of word-embedding-based text classification tasks, we compare SWEM and CNNs via subspace training in Section 5.1 of the paper.

Please follow the instructions in the intrinsic_dimension folder to reproduce the results.
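For intuition, here is a minimal NumPy sketch of the subspace-training idea used to measure intrinsic dimension: only a d-dimensional vector is trained, and the full model parameters are obtained through a fixed random projection. The variable names and dimensions are illustrative; this is not the code in intrinsic_dimension/.

# Minimal sketch of subspace training: train only a d-dimensional vector
# theta_d, and map it to the native parameter space via a fixed random
# projection P, so theta = theta_0 + P @ theta_d.
import numpy as np

D = 10000   # number of native model parameters (illustrative)
d = 100     # subspace (intrinsic) dimension being tested

theta_0 = np.random.randn(D) * 0.01        # fixed random initialization
P = np.random.randn(D, d) / np.sqrt(d)     # fixed random projection, not trained
theta_d = np.zeros(d)                      # the only trainable parameters

def full_params(theta_d):
    # Map the trainable subspace vector back to the native parameter space.
    return theta_0 + P.dot(theta_d)

theta = full_params(theta_d)
print(theta.shape)  # (10000,): full parameters, but only d numbers are trainable

# During training, gradients w.r.t. the full parameters are projected into the
# subspace (grad_d = P.T @ grad_D), and only theta_d is updated. The intrinsic
# dimension is the smallest d that recovers most of the baseline performance.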

Citation

Please cite our ACL paper in your publications if it helps your research:

@inproceedings{Shen2018Baseline,
  title={Baseline Needs More Love: On Simple Word-Embedding-Based Models and Associated Pooling Mechanisms},
  author={Shen, Dinghan and Wang, Guoyin and Wang, Wenlin and Min, Martin Renqiang and Su, Qinliang and Zhang, Yizhe and Li, Chunyuan and Henao, Ricardo and Carin, Lawrence},
  booktitle={ACL},
  year={2018}
}