ParlAI examples

This directory contains a few examples of ParlAI's basic training and evaluation loops.

  • base_train.py: a very simple example showing the outline of a training/validation loop using the default Agent parent class
  • display_data.py: uses agent.repeat_label to display data from a particular task provided on the command-line
  • display_model.py: shows the predictions of a provided model on a particular task provided on the command-line
  • eval_model.py: uses the named agent to compute evaluation metrics for a particular task provided on the command-line
  • build_dict.py: build a dictionary from a particular task provided on the command-line using core.dict.DictionaryAgent
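All of these scripts are built on the same observe/act exchange between a teacher (which serves examples) and an agent (which replies). The sketch below illustrates that pattern only; the `ToyTeacher` and `EchoAgent` classes are simplified stand-ins, not ParlAI's real Agent/Teacher implementations.

```python
# Simplified sketch of the observe/act loop the example scripts share.
# These toy classes are illustrative, not ParlAI's actual API.

class EchoAgent:
    """Minimal agent: remembers the last observation, replies with a greeting."""
    def observe(self, observation):
        self.observation = observation

    def act(self):
        # A real agent would produce a model prediction here.
        return {'text': 'hello teacher, you said: ' + self.observation['text']}

class ToyTeacher:
    """Serves a fixed list of (question, answer) pairs."""
    def __init__(self, data):
        self.data = list(data)
        self.idx = 0

    def act(self):
        text, label = self.data[self.idx]
        self.idx += 1
        return {'text': text, 'labels': [label]}

    def observe(self, reply):
        # A real teacher would update metrics (accuracy, F1, ...) here.
        self.last_reply = reply

teacher = ToyTeacher([('1 + 1 = ?', '2')])
agent = EchoAgent()

# One step of the standard loop: teacher acts, agent observes and acts,
# then the teacher observes the reply.
query = teacher.act()
agent.observe(query)
reply = agent.act()
teacher.observe(reply)
print(reply['text'])  # -> hello teacher, you said: 1 + 1 = ?
```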

Running These Examples

Most of them can be run simply by typing python {example}.py -t {task_name}. Here are some examples:

Display 10 random examples from task 1 of the "1k training examples" bAbI task:

python display_data.py -t babi:task1k:1

Run a train/valid loop with the basic agent (which prints what it receives and then says hello to the teacher, rather than learning anything) on the bAbI task:

python base_train.py -t babi:task1k:1

Display 100 random examples from multi-tasking on the bAbI task and the SQuAD dataset at the same time:

python display_data.py -t babi:task1k:1,squad -ne 100
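The `-t` argument encodes multi-tasking in the task string itself: commas separate tasks, and colons separate a task's sub-parameters. The sketch below shows that convention with a hypothetical `parse_task_string` helper; it is not part of ParlAI's API, just an illustration of the format.

```python
# Illustrative parsing of a ParlAI-style task string:
#   commas separate tasks (multi-tasking),
#   colons separate a single task's sub-parameters.
# parse_task_string is a hypothetical helper, not ParlAI code.

def parse_task_string(task_string):
    tasks = []
    for spec in task_string.split(','):
        name, *params = spec.split(':')
        tasks.append({'task': name, 'params': params})
    return tasks

print(parse_task_string('babi:task1k:1,squad'))
# -> [{'task': 'babi', 'params': ['task1k', '1']}, {'task': 'squad', 'params': []}]
```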

Evaluate on the bAbI validation set with a human agent (using the local keyboard as input):

python eval_model.py -m local_human -t babi:task1k:1 -dt valid

Evaluate an IR baseline model on the validation set of the Movies Subreddit dataset:

python eval_model.py -m ir_baseline -t "#moviedd-reddit" -dt valid

Display the predictions of that same IR baseline model:

python display_model.py -m ir_baseline -t "#moviedd-reddit" -dt valid

Build a dictionary on the bAbI "1k training examples" task 1 and save it to /tmp/dict.tsv:

python build_dict.py -t babi:task1k:1 --dict-file /tmp/dict.tsv
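Conceptually, the dictionary build is a single pass over the task: tokenize every example, count token frequencies, and write them out as TSV. The sketch below stands in for ParlAI's DictionaryAgent; the tokenization shown (lowercasing plus whitespace split) is a simplifying assumption, not what DictionaryAgent necessarily does.

```python
# Rough sketch of a dictionary-building pass: count token frequencies
# over all examples and save them as TSV. This is a simplified stand-in
# for ParlAI's DictionaryAgent, with naive whitespace tokenization.

from collections import Counter

def build_dict(examples):
    counts = Counter()
    for text in examples:
        counts.update(text.lower().split())
    return counts

def save_dict(counts, path):
    with open(path, 'w') as f:
        for token, freq in counts.most_common():
            f.write(f'{token}\t{freq}\n')

counts = build_dict(['Mary moved to the bathroom', 'Where is Mary'])
save_dict(counts, '/tmp/dict_sketch.tsv')
print(counts['mary'])  # -> 2
```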

Train a simple sequence-to-sequence model on the "1k training examples" bAbI task 1 with a batch size of 8 examples for one epoch (requires PyTorch):

python train_model.py -m seq2seq -t babi:task1k:1 -bs 8 -eps 1 -mf /tmp/model_s2s

Train the attentive LSTM model of Chen et al. on the SQuAD dataset with a batch size of 32 examples (requires PyTorch):

python train_model.py -m drqa -t squad -bs 32 -mf /tmp/model_drqa
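Regardless of model, the training commands above follow the same shape: run training steps for the requested number of epochs (`-eps`), validate after each epoch, and keep the best checkpoint at the model file (`-mf`). The schematic below illustrates that shape only; `ToyModel` and `train` are illustrative names, not ParlAI's implementation.

```python
# Schematic of the train/validate loop behind train_model.py.
# ToyModel is purely illustrative: its "validation score" just
# grows with the number of training steps taken.

class ToyModel:
    def __init__(self):
        self.steps = 0
        self.saved_at = None

    def train_step(self, batch):
        self.steps += 1

    def eval_step(self, batch):
        # Pretend validation improves with more training.
        return self.steps

    def save(self, path):
        self.saved_at = path

def train(model, train_batches, valid_batches, num_epochs, model_file):
    best_score = float('-inf')
    for _ in range(num_epochs):
        for batch in train_batches:
            model.train_step(batch)
        # Validate after each epoch; keep only the best checkpoint.
        score = sum(model.eval_step(b) for b in valid_batches) / len(valid_batches)
        if score > best_score:
            best_score = score
            model.save(model_file)
    return best_score

model = ToyModel()
best = train(model, train_batches=[0, 1], valid_batches=[0],
             num_epochs=1, model_file='/tmp/model_sketch')
print(best, model.saved_at)  # -> 2.0 /tmp/model_sketch
```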

Evaluate an already-trained model on SQuAD:

python eval_model.py -t squad -mf "models:drqa/squad/model"

Run an interactive session with an already-trained SQuAD model:

python interactive.py -mf "models:drqa/squad/model"