# A decomposable attention model for Natural Language Inference
This directory contains an implementation of the entailment prediction model described by Parikh et al. (2016). The model is notable for its competitive performance with very few parameters.
The model is implemented using Keras and spaCy. Keras is used to build and train the network. spaCy is used to load the GloVe vectors, perform the feature extraction, and help you apply the model at run-time. The following demo code shows how the entailment model can be used at runtime, once the hook is installed to customise the `.similarity()` method of spaCy's `Doc` objects:
```python
def demo(shape):
    nlp = spacy.load('en_vectors_web_lg')
    nlp.add_pipe(KerasSimilarityShim.load(nlp.path / 'similarity', nlp, shape))

    doc1 = nlp(u'The king of France is bald.')
    doc2 = nlp(u'France has no king.')

    print("Sentence 1:", doc1)
    print("Sentence 2:", doc2)

    entailment_type, confidence = doc1.similarity(doc2)
    print("Entailment type:", entailment_type, "(Confidence:", confidence, ")")
```
This gives the output `Entailment type: contradiction (Confidence: 0.60604566)`, showing that the system has definite opinions about Bertrand Russell's famous conundrum!
I'm working on a blog post to explain Parikh et al.'s model in more detail. A notebook is available that briefly explains this implementation. I think it is a very interesting example of the attention mechanism, which I didn't understand very well before working through this paper. There are lots of ways to extend the model.
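To give a flavour of that mechanism, here is a minimal NumPy sketch of the paper's soft alignment ("attend") step. It's an illustration under simplifying assumptions, not the code from this directory: `soft_align` is a made-up helper name, and the real model first feeds the token vectors through a learned feed-forward projection before computing the scores.

```python
import numpy as np

def soft_align(a, b):
    """Softly align sentence a to sentence b, in the spirit of the
    'attend' step of Parikh et al. (2016). Illustration only.

    a: array of shape (len_a, dim), token vectors for sentence 1
    b: array of shape (len_b, dim), token vectors for sentence 2
    Returns an array of shape (len_a, dim): for each token in a,
    a weighted average of b's tokens (its softly aligned 'phrase').
    """
    scores = a @ b.T                                   # (len_a, len_b) similarity matrix
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)      # row-wise softmax over b
    return weights @ b
```

In the full model, each token is then compared with its aligned phrase, and the comparison vectors are aggregated to produce the final entailment decision.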
## What's where

| File | Description |
| --- | --- |
| `__main__.py` | The script that will be executed. Defines the CLI, the data reading, etc. (all the boring stuff). |
| `spacy_hook.py` | Provides a class, `KerasSimilarityShim`, that lets you use an arbitrary function to customize spaCy's `.similarity()` method. |
| `keras_decomposable_attention.py` | Defines the neural network model. |
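The shim pattern itself is simple: a pipeline component re-routes `.similarity()` through your own function via spaCy's `user_hooks`. Here is a stripped-down sketch; the class name and structure are simplified illustrations, not the actual `KerasSimilarityShim`, which also handles loading the Keras model and its weights.

```python
class SimilarityShim:
    """Minimal illustration of hooking spaCy's .similarity() method."""

    def __init__(self, predict):
        self.predict = predict  # any function taking (doc1, doc2)

    def __call__(self, doc):
        # Installed as a pipeline component, so every processed Doc gets
        # its similarity method replaced with our model, instead of the
        # default average-of-vectors similarity.
        doc.user_hooks['similarity'] = self.predict
        doc.user_span_hooks['similarity'] = self.predict
        return doc
```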
## Setup

```bash
pip install keras
pip install spacy
python -m spacy download en_vectors_web_lg
```
You'll also want to get Keras working on your GPU, and you will need a backend, such as TensorFlow or Theano. This will depend on your setup, so you're mostly on your own for this step. If you're using AWS, try the NVIDIA AMI. It made things pretty easy.
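For example, a TensorFlow backend can be installed from pip; note that the GPU build ships as a separate package and has its own driver and CUDA requirements:

```bash
pip install tensorflow        # CPU build
pip install tensorflow-gpu    # GPU build, if your drivers and CUDA are set up
```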
Once you've installed the dependencies, you can run a small preliminary test of the Keras model.
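Assuming the test suite sits in the model file itself (the exact path is an assumption; adjust it to your checkout), the check looks something like:

```bash
# Path assumed; point pytest at wherever the model tests live.
py.test keras_parikh_entailment/keras_decomposable_attention.py
```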
This compiles the model and fits it with some dummy data. You should see that both tests passed.
Finally, download the [Stanford Natural Language Inference corpus](https://nlp.stanford.edu/projects/snli/).
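If you have `wget` handy, fetching and unpacking the corpus looks something like this (the filename is taken from the SNLI download page and may change):

```bash
# Filename assumed from the SNLI project page.
wget https://nlp.stanford.edu/projects/snli/snli_1.0.zip
unzip snli_1.0.zip
```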
## Running the example
You can run the `keras_parikh_entailment/` directory as a script, which executes the file `keras_parikh_entailment/__main__.py`. If you run the script without arguments, the usage is shown. Running it with `-h` explains the command line arguments.
The first thing you'll want to do is train the model:
```bash
python keras_parikh_entailment/ train -t <path to SNLI train JSON> -s <path to SNLI dev JSON>
```
Training takes about 300 epochs for full accuracy, and I haven't rerun the full experiment since refactoring things to publish this example — please let me know if I've broken something. You should get to at least 85% on the development data even after 10-15 epochs.
The other two modes demonstrate run-time usage. I never like relying on the accuracy printed by `.fit()` methods. I never really feel confident until I've run a new process that loads the model and starts making predictions, without access to the gold labels. I've therefore included an `evaluate` mode:

```bash
python keras_parikh_entailment/ evaluate -s <path to SNLI train JSON>
```
Finally, there's also a little demo, which mostly exists to show you how run-time usage will eventually look:

```bash
python keras_parikh_entailment/ demo
```