# Neural Constituency Parser Analysis
This repository contains the code necessary to reproduce the experiments in *What's Going On in Neural Constituency Parsers? An Analysis* from NAACL 2018.
If you are looking for a parser implementation and not the analysis, we recommend you instead use the code from Mitchell's repository, which also includes the model improvements described in the paper.
## Requirements and Setup
- Python 3.5 or higher.
- DyNet. We recommend installing DyNet from source with MKL support for significantly faster run time.
- EVALB. Before starting, run `make` inside the `EVALB/` directory to compile an `evalb` executable. This will be called from Python for evaluation.
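As a sketch of how a Python-side call to the compiled `evalb` executable might look (the helper name, paths, and the use of a parameter file here are illustrative assumptions, not the repository's actual code; EVALB's basic usage is `evalb [-p param_file] gold_file test_file`):

```python
import subprocess

def evalb_command(evalb_path, gold_path, pred_path, param_path=None):
    # Assemble the argument list for an EVALB invocation:
    # evalb [-p param_file] gold_file test_file
    cmd = [evalb_path]
    if param_path is not None:
        cmd += ["-p", param_path]
    cmd += [gold_path, pred_path]
    return cmd

# Evaluation code could then run the command and capture EVALB's
# bracketing-score report from stdout:
# result = subprocess.run(
#     evalb_command("EVALB/evalb", "gold.txt", "predicted.txt"),
#     capture_output=True, text=True)
```

The actual invocation in the repository may differ (for example in which parameter file it passes), so treat this only as an outline of the pattern.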
## Command Line Arguments
The base model can be trained with the command:

```
python3 src/main.py train --parser-type chart --model-path-base models/base-model
```
The dev score will be appended to the model file name in the form `_dev=xx.xx`, where each `x` is replaced with a digit. This full file name (including the score suffix) must be specified when running the program with an already-trained model, as is done for some experiments.
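Because the dev score is embedded in the file name, selecting a trained checkpoint can be done by parsing that suffix. A minimal sketch (the helper name is hypothetical; only the `_dev=xx.xx` naming convention comes from the repository):

```python
import re

def dev_score_from_path(model_path):
    """Extract the dev score that training appends to the model file name.

    Expects a trailing suffix of the form `_dev=xx.xx`;
    returns None if the path carries no such suffix.
    """
    match = re.search(r"_dev=(\d+\.\d+)$", model_path)
    return float(match.group(1)) if match else None

print(dev_score_from_path("models/base-model_dev=92.34"))  # 92.34
```

A helper like this could, for instance, pick the highest-scoring model file in `models/` before running the test-set command below.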
The following table describes the command line arguments used to run each experiment in the paper:

| Section | Command |
| --- | --- |
| 3.2 | Use the base model command with |
| 4.1 | Add the option |
To run on the test set, use:

```
python3 src/main.py test --model-path-base models/base-model_dev=xx.xx
```