Automatically exported from code.google.com/p/incremental-top-down-parser

// README
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
// Copyright 2004-2013 Brian Roark
// Author: roarkbr@gmail.com  (Brian Roark)

Incremental Top-Down Parser README

The incremental top-down parser is a statistical syntactic parser that
processes input strings from left-to-right, producing partial
derivations in a top-down manner, using beam search as detailed in
Roark (2001) and Roark (2004). It can output parser state statistics
of utility for psycholinguistic studies, as detailed in Roark et al.
(2009).

Note that this code is research code, provided to the community
without promise of additional technical support. Many researchers have
used this code in their work and have successfully compiled and run it
on Linux and Mac OS X systems. If you follow the instructions below,
you should be able to use it as well, but if you encounter problems
you are largely on your own.

In addition to the parser and the model training code, a model trained
on sections 2-21 of the Penn WSJ treebank has been included. For those
who want to train their own models, there are some scripts for
converting Penn Treebank files from the 'pretty-print' representation
provided by the LDC to the format expected by the model training
routine: one parse per line, with no function tags or empty nodes,
etc. You are free to change your non-terminal labels (e.g., changing
VBs to AUXs), but the code assumes that POS-tags and non-POS-tag
non-terminals are disjoint.

====== INSTALLATION =======

To compile, first change the install directory in Makedefs. Then cd
to the bin directory, copy Makefile.MAC to Makefile on Mac OS X or
Makefile.linux to Makefile on Linux (Linux requires the -lm flag; Mac
OS X does not), and call make all.
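
For example, assuming the Makefiles live in the bin directory as the
instructions above suggest, the full sequence on a Linux system would
look something like the following (edit Makedefs first so that the
install directory points at your own checkout):

cd bin
cp Makefile.linux Makefile
make all

On Mac OS X, copy Makefile.MAC to Makefile instead.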

====== USAGE =======

Parsing:

To get information on usage, type the following on the command line: 

bin/tdparse -?

The following is standard usage for parsing a file of strings
'mystrings.txt':

bin/tdparse -v -F parse.output parse.model.wsj.slc.p05 mystrings.txt

The strings file 'mystrings.txt' should have one string per line (see
the file data/test.test.txt).

The parser output comes with three numbers before the parse:

  cand  tot-cands   -logCondProb    (TOP...

These numbers distinguish between candidates in k-best parsing.
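
For example, a line of 1-best output might look something like the
following (the numbers here are purely illustrative):

1  1  45.2341  (TOP (S (NP (DT the) (NN dog)) (VP (VBD barked)) (. .)))

i.e., candidate 1 out of 1 candidate in the list, with a negative log
conditional probability of 45.2341, followed by the parse itself.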

To evaluate with just one candidate:

sed 's/.*(TOP/(TOP/g' parse.output >parse.best

Then:

evalb -p data/evalb/COLLINS.prm mytrees.gold.txt parse.best >parse.eval


Training models:

To train a model, you must first prepare a training treebank. Scripts
have been provided to assist in converting Penn Treebank style parses
from their raw format as distributed by the LDC to the format expected
by the model training code. Briefly, that code expects: 1) one parse
per line in the text file; 2) no empty nodes; 3) a single TOP
category, with only unary productions to the root non-terminal of the
tree; and 4) disjoint POS-tag and non-POS-tag non-terminal sets. Our
scripts make use of tsurgeon, available here:

http://nlp.stanford.edu/software/tregex.shtml

Once that has been installed, modify the data/tbnorm/tbnorm.sh script
to include the path to the tsurgeon directory. Then, cd to data/tbnorm
and:

./tbnorm.sh raw.treebank.file >normalized.treebank.file

This script does not guarantee condition 4 above, that the POS-tags
are disjoint from the other non-terminals. The tbtest.sh script checks
whether this is the case:

./tbtest.sh normalized.treebank.file

This will report any non-terminals that are used both as POS-tags and
as non-POS-tag non-terminals in the given treebank file.

See data/test.train.txt for a small, fake treebank in the required
format.
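
For illustration, a single line of such a file would look something
like the following made-up tree (in the spirit of data/test.train.txt):

(TOP (S (NP (DT the) (NN dog)) (VP (VBD barked)) (. .)))

Note the single TOP category with a unary production to the root S,
the absence of function tags and empty nodes, and the use of POS-tags
(DT, NN, VBD, .) that are disjoint from the other non-terminals.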

To get information on usage, type the following on the command line: 

bin/tdptrain -?

The following is standard usage for training a model from a given
treebank:

bin/tdptrain -t0.05 -p -s"(SINV(S" -v -F parse.model.slc.p05 -m data/tree.functs.sfile normalized.treebank.file

Please see the usage from the command line for further options.

====== VERBOSE OUTPUT =======

If you run with the -p command line switch, the parser outputs
word-by-word scores that are available from its standard operation,
including the surprisal measures and other internal state statistics.
There are column headers, and I'll explain each of them in turn:

pfix header

In this column, we get some info on the particular row. For each word
w, this will read 'pfix#w', where # can be one of three symbols: '-'
marks tokens which 'belong' to the previous token, such as
punctuation, contractions, or end-of-string; '+' marks closed class
lexical items; and ':' marks anything else. Non-lexical and closed
class words are identified by including the following switch when
parsing:

-c data/closed.class.words

and that file can be modified as you like. In that file, a 0
following the word means punctuation or contraction; a 1 following the
word means closed class.
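
For illustration, entries in that file follow a simple word-plus-code
pattern, one per line; a few hypothetical lines are shown below (the
exact words and whitespace conventions should be checked against the
distributed data/closed.class.words file):

, 0
n't 0
the 1
of 1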

prefix

This column gives the prefix probability, i.e., the sum of
probabilities over the beam.

srprsl nolex    lex

These three columns give total surprisal, syntactic surprisal, and
lexical surprisal, as defined in the paper.

ambig

This column gives the entropy over the current beam, essentially a
measure of ambiguity. If p is the conditional probability of each
analysis in the normalized beam, then this is -\sum p log p over the
beam.
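
As a quick sanity check on this formula: if the beam holds k analyses
with equal normalized probability 1/k, then the score is
-\sum (1/k) log (1/k) = log k, its maximum; a beam dominated by a
single analysis gives a value near zero.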

open

This column gives the weighted average number of open brackets over
the beam. (Easy to derive from the parser, but not of any particular
utility so far.)

rernk toprr

These two columns examine what happens to the previously top-ranked
parses. I'll explain this informally by abusing notation. Let T_i be
the best scoring parse after word i, and let T^+_i be the set of
parses after word i+1 that are extensions of parse T_i. Then rernk is
p(T^+_i)/p(T_i), where p in this case is normalized over the beam
(hence the ratio can be greater than 1). A score of 1 means that these
analyses occupy about the same amount of conditional probability mass
in the beam after one step; a score greater than 1 means they now
occupy more of the conditional probability mass; and less than 1 means
they occupy less. A score of zero means that every extension of T_i
has fallen out of the beam.
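
As a worked example of the first column: if T_i held 0.4 of the
normalized probability mass in the beam after word i, and its
extensions together hold 0.2 of the mass after word i+1, then rernk is
0.2/0.4 = 0.5, i.e., that family of analyses now accounts for half as
much of the beam as it did one word earlier.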

The second column here just looks at the top ranked parse at time
i+1. If it is an extension of the top ranked parse at time i, then the
ratio of conditional probabilities is provided; otherwise the value is
zero.

stps

Finally, we look at the weighted average (over the normalized beam)
of the number of derivation steps taken since the previous word.

Part II: measures that cannot be derived from standard operation of
the parser. These include the entropy measures, which are calculated
by extending the current parse to all possible following words. To get
the entropy scores, run with the -a switch (which makes parsing very
slow), and the parser will provide additional scores related to
entropy.
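
Putting these pieces together, a hypothetical invocation requesting
the word-by-word scores, the entropy scores, and the closed class
coding might look like the following (flag order is illustrative;
check bin/tdparse -? for the authoritative usage):

bin/tdparse -v -p -a -c data/closed.class.words -F parse.output parse.model.wsj.slc.p05 mystrings.txt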

afix header

Just like the pfix header, with closed class coding, etc.

            entropy  noLexH    LexH

Total entropy, syntactic entropy, and lexical entropy, as described in
the paper.

    rank  log10(rank)

These two columns show the rank of the actual next word within the
vocabulary, in terms of the conditional probability of the word given
the left context. Since we are calculating the probability of every
word in the lexicon, we can straightforwardly rank the words and
report where the actual word falls in that ranking (along with the log
of that rank).
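
For example, if the actual next word is the 12th most probable word in
the vocabulary given the left context, then rank is 12 and log10(rank)
is roughly 1.08.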

log(p*/p)

This score compares the probability of the top ranked word in the
lexicon with that of the actual next word. Thus if the rank is 1, this
score will be zero; otherwise it will be greater than zero.

One last thing. There are scores attached to each output tree (by
default the 1-best, but use -k to retrieve more), and those scores
are:

candidate rank
total number of candidates in the list
tree score (without the normalization constant)
tree negative log prob
purely syntactic contribution to that total neg log prob
lexical contribution to the neg log prob  (from POS -> word)

=========== CITING ================

The basic citation for the parsing approach is:

Brian Roark. 2001. Probabilistic top-down parsing and language
modeling. Computational Linguistics, 27(2), pages 249-276.

If using the parser to derive parser state statistics for
psycholinguistic modeling, it's also a good idea to cite:

Brian Roark, Asaf Bachrach, Carlos Cardenas and Christophe Pallier.
2009. Deriving lexical and syntactic expectation-based measures for
psycholinguistic modeling via incremental top-down parsing. In
Proceedings of the Conference on Empirical Methods in Natural Language
Processing (EMNLP), pages 324-333.

=========== REFERENCES ================

Brian Roark. 2011. Expected surprisal and entropy. Technical
Report #CSLU-11-004, Center for Spoken Language Processing, Oregon
Health & Science University.

Brian Roark, Asaf Bachrach, Carlos Cardenas and Christophe Pallier.
2009. Deriving lexical and syntactic expectation-based measures for
psycholinguistic modeling via incremental top-down parsing. In
Proceedings of the Conference on Empirical Methods in Natural Language
Processing (EMNLP), pages 324-333.

Michael Collins and Brian Roark. 2004. Incremental Parsing with the
Perceptron Algorithm. In Proceedings of the 42nd Annual Meeting of the
Association for Computational Linguistics (ACL), pages 111-118.

Brian Roark. 2004. Robust garden path parsing. Natural Language
Engineering, 10(1), pages 1-24.

Brian Roark. 2001. Robust Probabilistic Predictive Syntactic
Processing: Motivations, Models, and Applications. Ph.D. Thesis,
Department of Cognitive and Linguistic Sciences, Brown University.

Brian Roark. 2001. Probabilistic top-down parsing and language
modeling. Computational Linguistics, 27(2), pages 249-276.

Mark Johnson and Brian Roark. 2000. Compact non-left-recursive
grammars using the selective left-corner transform and factoring. In
Proceedings of the 18th International Conference on Computational
Linguistics (COLING), pages 355-361.

Brian Roark and Mark Johnson. 1999. Efficient probabilistic top-down
and left-corner parsing. In Proceedings of the 37th Annual Meeting of
the Association for Computational Linguistics, pages 421-428.