- create the following directories: raw, raw/dataset, raw/embeddings
- download the GloVe embeddings archive (http://nlp.stanford.edu/data/glove.840B.300d.zip), unzip it, and put the file [glove.840B.300d.txt] inside the raw/embeddings directory
- download the SNLI corpus (http://nlp.stanford.edu/projects/snli/snli_1.0.zip), unzip it, and put the files [snli_1.0_train.jsonl, snli_1.0_dev.jsonl, snli_1.0_test.jsonl] inside the raw/dataset directory
- run "python preprocess_data.py"
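The two raw inputs above have simple line-oriented formats: GloVe is one token per line followed by 300 space-separated floats, and SNLI is one JSON object per line with `sentence1`, `sentence2`, and `gold_label` fields. A minimal parsing sketch is below; the function names `parse_glove` and `parse_snli` are illustrative (the actual preprocessing lives in preprocess_data.py), and the handling of space-containing tokens and of the "-" label follows the published file formats, not this repository's code.

```python
import json

def parse_glove(lines, dim=300):
    """Parse GloVe text lines into a {token: vector} dict.

    In the 840B file a few tokens themselves contain spaces, so we take
    the last `dim` fields as the vector and rejoin the rest as the token.
    """
    embeddings = {}
    for line in lines:
        parts = line.rstrip("\n").split(" ")
        token = " ".join(parts[:-dim])
        embeddings[token] = [float(x) for x in parts[-dim:]]
    return embeddings

def parse_snli(lines):
    """Yield (premise, hypothesis, label) triples from SNLI jsonl lines.

    Pairs where annotators reached no consensus carry the gold label "-"
    and are conventionally skipped.
    """
    for line in lines:
        ex = json.loads(line)
        if ex["gold_label"] != "-":
            yield ex["sentence1"], ex["sentence2"], ex["gold_label"]
```

Both functions take any iterable of lines, so an open file works directly, e.g. `with open("raw/embeddings/glove.840B.300d.txt", encoding="utf-8") as f: vectors = parse_glove(f)`.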
- for the centroids model:
  - first run "python generate_centroids.py" once; the script can be found inside models/centroids/
  - then run "python centroids_model.py", which can also be found inside models/centroids/
  - the results will be created inside models/centroids/results
- for the GRU model:
  - run "python gru.py", which can be found inside models/gru/
  - the results will be created inside models/gru/results
- for the attention model:
  - run "python attention_model.py", which can be found inside models/attention/
  - the results will be created inside models/attention/results
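The model steps above can be chained into a single convenience runner. This is a hypothetical helper, not part of the repository; it only reuses the script names and directories listed in this README, and each script is launched from its own directory since that is where the READMEs say the scripts live and where the results directories are created.

```python
import subprocess

# (working directory, script) pairs, in the order the README lists them;
# generate_centroids.py must run before centroids_model.py.
STEPS = [
    ("models/centroids", "generate_centroids.py"),
    ("models/centroids", "centroids_model.py"),
    ("models/gru", "gru.py"),
    ("models/attention", "attention_model.py"),
]

def run_all():
    """Run every model script in order, stopping on the first failure."""
    for cwd, script in STEPS:
        subprocess.run(["python", script], cwd=cwd, check=True)
```

Calling `run_all()` after the preprocessing step reproduces the manual sequence; `check=True` aborts the pipeline if any script exits with an error.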