Code for our NAACL 2019 paper "Imposing Label-Relational Inductive Bias for Extremely Fine-Grained Entity Typing".
Paper link: http://arxiv.org/abs/1903.02591
Model overview: see the model figure in the paper linked above.
Requirements
- PyTorch 0.4.1
- tensorboardX
- tqdm
- gluonnlp
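A minimal environment-setup sketch, assuming the dependencies above are installed from PyPI under these package names (the PyTorch 0.4.1 wheel may be platform-specific):

# install the dependencies listed above (package names assumed to match PyPI)
pip install torch==0.4.1 tensorboardX tqdm gluonnlp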
First, prepare the dataset and embeddings:
- download the data from http://nlp.cs.washington.edu/entity_type/data/ultrafine_acl18.tar.gz, unzip it, and put it under data/
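A sketch of this step as shell commands; the layout assumption here (that crowd/ and ontonotes/ end up directly under data/, matching the -eval_data paths used below) may require moving files after extraction:

# download the Ultra-Fine release and unpack it under data/
mkdir -p data
wget http://nlp.cs.washington.edu/entity_type/data/ultrafine_acl18.tar.gz
tar -xzf ultrafine_acl18.tar.gz -C data/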
Training the full model on the Ultra-Fine dataset
CUDA_VISIBLE_DEVICES=1 python main.py $RUN_ID$ -lstm_type single -model_debug -enhanced_mention -data_setup joint -add_crowd -multitask -gcn
Testing on the Ultra-Fine crowd test set
CUDA_VISIBLE_DEVICES=1 python main.py $RUN_ID$ -lstm_type single -model_debug -enhanced_mention -data_setup joint -add_crowd -multitask -gcn -load -mode test -eval_data crowd/test.json
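For concreteness, the same two commands with the $RUN_ID$ placeholder replaced by an arbitrary run name; the assumption (not verified here) is that this positional id labels the saved checkpoint, so the test command with -load must reuse the id used for training:

# "ultrafine_full" is a hypothetical run name; reuse it at test time so -load finds the checkpoint (assumption)
RUN_ID=ultrafine_full
CUDA_VISIBLE_DEVICES=1 python main.py $RUN_ID -lstm_type single -model_debug -enhanced_mention -data_setup joint -add_crowd -multitask -gcn
CUDA_VISIBLE_DEVICES=1 python main.py $RUN_ID -lstm_type single -model_debug -enhanced_mention -data_setup joint -add_crowd -multitask -gcn -load -mode test -eval_data crowd/test.json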
Ablations
a) w/o GCN
CUDA_VISIBLE_DEVICES=1 python main.py $RUN_ID$ -lstm_type single -model_debug -enhanced_mention -data_setup joint -add_crowd -multitask
b) w/o enhanced mention-context interaction
CUDA_VISIBLE_DEVICES=1 python main.py $RUN_ID$ -lstm_type single -gcn -enhanced_mention -data_setup joint -add_crowd -multitask
Training on OntoNotes
CUDA_VISIBLE_DEVICES=1 python main.py $RUN_ID$ -lstm_type single -enhanced_mention -goal onto -gcn
Testing on OntoNotes
CUDA_VISIBLE_DEVICES=1 python main.py $RUN_ID$ -lstm_type single -enhanced_mention -goal onto -gcn -mode test -load -eval_data ontonotes/g_dev.json
The meaning of the arguments can be found in config_parser.py
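If config_parser.py builds a standard argparse parser (an assumption, not verified here), the full flag list and help strings can also be printed with:

# print all supported arguments (assumes argparse-style -h handling)
python main.py -h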
We thank Choi et al. for releasing the Ultra-Fine dataset and the base model: https://github.com/uwnlp/open_type.