Code for Futrell & Levy (2017, EACL): Noisy-Context Surprisal as a Human Sentence Processing Cost Model


Incremental noisy channel sentence processing model

This repository contains Python 3 code for replicating Futrell & Levy (2017, EACL).

If you use this code, please cite:

@inproceedings{futrell2017noisy,
author={Richard Futrell and Roger Levy},
title={Noisy-context surprisal as a human sentence processing cost model},
booktitle={Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers},
year={2017},
address={Valencia, Spain}}

To get started: pip3 install -r requirements.txt.

Figure 1

Figure 1 shows noisy-context surprisal values for grammatical and ungrammatical completions of a string generated by a PCFG. The results show a crossover between English and German, whereby English shows a grammaticality illusion and German does not. This phenomenon is called structural forgetting. To generate the model values:

import experiments
_, english = experiments.verb_forgetting_conditions(m=.5, r=.5, e=.2, s=.8)
_, german = experiments.verb_forgetting_conditions(m=.5, r=.5, e=.2, s=0)

The resulting numbers, divided by log 2 to convert nats to bits, are plotted against reading time data in shravanplot.R.
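Assuming the model's surprisal values come out in nats (natural log), the conversion mentioned above is just a division by log 2. A minimal sketch (the function name here is illustrative, not from the repository):

```python
import math

def nats_to_bits(values):
    """Convert surprisal values from nats to bits by dividing by log 2."""
    return [v / math.log(2) for v in values]

# Example: a surprisal of log(2) nats is exactly 1 bit.
print(nats_to_bits([math.log(2), math.log(4)]))  # -> [1.0, 2.0]
```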

Figure 2

Figure 2 shows regions of different model behavior for the structural forgetting case, based on PCFG parameters and depth of embedding. To generate Figure 2, do:

import experiments
df = experiments.verb_forgetting_grid()

This will bring up a matplotlib plot of Figure 2.

Table 2

Table 2 uses data from the Google Syntactic N-grams. Supposing you have the ngrams at $NGRAMS, use the extraction script to pull out the appropriate counts:

$ zcat $NGRAMS/arcs* | python3 01 01 | sort | sh > arcs_01-01

The script takes two arguments, match_code and get_code.

- match_code tells the script which dependency structures to filter for. For example, 01 means a head and its direct dependent; 012 means a chain of a word w_0, w_0's dependent w_1, and w_1's dependent w_2; 011 means one head with two dependents.
- get_code tells the script which two words to extract wordforms for.

The example above looks for direct dependencies and takes the wordforms of head and dependent. The table in the paper uses codes 012 01, 012 02, and 011 12.
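If it helps to see the code scheme concretely, here is an illustrative sketch (not the repository's script): read each digit of match_code as the tree depth of successive words, and each digit of get_code as an index into those words.

```python
def depths(heads):
    """Tree depth of each word, given heads as a list of head indices (None for the root)."""
    out = []
    for h in heads:
        d = 0
        while h is not None:
            d += 1
            h = heads[h]
        out.append(d)
    return out

def matches(match_code, heads):
    """True if the fragment's depth profile spells out match_code."""
    return depths(heads) == [int(c) for c in match_code]

def extract(get_code, words):
    """Pick out the wordforms indexed by the digits of get_code."""
    return tuple(words[int(c)] for c in get_code)

# A chain w0 -> w1 -> w2 has depths 0,1,2; a head with two dependents has 0,1,1.
chain = [None, 0, 1]   # w1 depends on w0, w2 depends on w1
fork = [None, 0, 0]    # w1 and w2 both depend on w0
print(matches("012", chain), matches("011", fork))  # -> True True
print(extract("02", ["saw", "the", "dog"]))         # -> ('saw', 'dog')
```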

The resulting file arcs_01-01 contains joint counts of two words in the specified dependency relationship. Now generate the vocabulary file for the frequency cutoff:

$ cat arcs_01-01 | sed "s/^.* //g" | sort | uniq -c > vocab

Then use the vocab file to calculate MI with a frequency cutoff:

$ cat arcs_01-01 | python3 vocab 10000
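The MI computation itself is standard: a plug-in estimate of mutual information over the joint pair counts, restricted to the most frequent forms. A minimal sketch (illustrative only; the repository script's input format and options are not shown here):

```python
import math
from collections import Counter

def mutual_information(pair_counts, vocab, cutoff):
    """Plug-in MI estimate over word pairs, restricted to the cutoff most frequent forms."""
    keep = set(sorted(vocab, key=vocab.get, reverse=True)[:cutoff])
    pairs = {(a, b): c for (a, b), c in pair_counts.items() if a in keep and b in keep}
    total = sum(pairs.values())
    px, py = Counter(), Counter()
    for (a, b), c in pairs.items():
        px[a] += c
        py[b] += c
    mi = 0.0
    for (a, b), c in pairs.items():
        # p(a,b) * log2( p(a,b) / (p(a) p(b)) ), written in terms of raw counts
        mi += (c / total) * math.log2(c * total / (px[a] * py[b]))
    return mi

counts = {("the", "dog"): 10, ("the", "cat"): 10, ("a", "dog"): 10, ("a", "cat"): 10}
vocab = {"the": 20, "a": 20, "dog": 20, "cat": 20}
print(mutual_information(counts, vocab, 4))  # independent pairs -> 0.0
```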

To compare the MI of two sets of counts using a permutation test, do (for example):

$ python3 arcs_012-01 arcs_012-02 vocab 10000 500

This runs a permutation test with 500 samples, comparing MI in the files arcs_012-01 and arcs_012-02 with the vocabulary from the file vocab, cut off at the most frequent 10,000 forms.
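The idea of the test can be sketched generically: pool the two samples, reshuffle the assignment many times, and count how often the shuffled difference in the statistic is at least as large as the observed one. A minimal sketch using the mean in place of MI (the repository script applies the same idea to MI over pair counts):

```python
import random

def permutation_test(stat, xs, ys, n_samples=500, seed=0):
    """Two-sided permutation test for the difference of stat between two samples."""
    rng = random.Random(seed)
    observed = abs(stat(xs) - stat(ys))
    pooled = xs + ys
    hits = 0
    for _ in range(n_samples):
        rng.shuffle(pooled)
        a, b = pooled[:len(xs)], pooled[len(xs):]
        if abs(stat(a) - stat(b)) >= observed:
            hits += 1
    return hits / n_samples  # estimated p-value

# Toy example: two clearly separated samples give a small p-value.
mean = lambda s: sum(s) / len(s)
p = permutation_test(mean, [10.0] * 20, [0.0] * 20, n_samples=200)
print(p < 0.05)
```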

Figure 3

Figure 3 shows the mutual information over part-of-speech tags for different dependency relations in several UD corpora. To replicate these numbers, do in Python:

import hdmi

Figure 4

Figure 4 shows average PMI values of POS tags at different distances in the UD 1.4 corpora. To replicate these numbers, do in Python:

import hdmi
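Under one simple reading of that quantity (an assumption on my part; hdmi may compute it differently), the average PMI of tag pairs at linear distance d over a tagged corpus can be sketched as:

```python
import math
from collections import Counter

def avg_pmi_at_distance(sentences, d):
    """Average pointwise mutual information of tag pairs (t_i, t_{i+d}) over a corpus."""
    joint, left, right = Counter(), Counter(), Counter()
    for tags in sentences:
        for i in range(len(tags) - d):
            joint[(tags[i], tags[i + d])] += 1
            left[tags[i]] += 1
            right[tags[i + d]] += 1
    n = sum(joint.values())
    total = 0.0
    for (a, b), c in joint.items():
        pmi = math.log2(c * n / (left[a] * right[b]))
        total += (c / n) * pmi
    # Averaging PMI over observed token pairs equals the MI of the tag pair distribution.
    return total

sents = [["DET", "NOUN", "VERB"], ["DET", "NOUN", "VERB"]]
print(round(avg_pmi_at_distance(sents, 1), 3))
```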