From ceee417c5f97460b113c2b725e54fdcbda605659 Mon Sep 17 00:00:00 2001
From: Kenneth Heafield
Date: Wed, 12 Mar 2014 23:00:40 -0700
Subject: [PATCH] Update abstract

---
 paper/acl2014.tex | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/paper/acl2014.tex b/paper/acl2014.tex
index a4b2ec32..0f280052 100644
--- a/paper/acl2014.tex
+++ b/paper/acl2014.tex
@@ -44,8 +44,7 @@
 \maketitle
 \begin{abstract}
 We contribute a faster decoding algorithm for phrase-based machine translation. Translation hypotheses keep track of state, such as context for the language model and coverage of words in the source sentence.
 Most features depend upon only part of the state, but traditional algorithms, including cube pruning, handle state atomically. For example, cube pruning will repeatedly query the language model with hypotheses that differ only in source coverage, despite the fact that source coverage is irrelevant to the language model.
-Our algorithm avoids this behavior by placing hypotheses into equivalence classes, masking the parts of state that matter least to the score.
-Since our algorithm and cube pruning are both approximate, the improvement can be used to increase speed or accuracy.
+Our algorithm avoids this behavior by placing hypotheses into equivalence classes, masking the parts of state that matter least to the score. Moreover, we exploit shared words in hypotheses to iteratively refine language model scores rather than treating language model state as atomic. Since our algorithm and cube pruning are both approximate, the improvement can be used to increase speed or accuracy.
 When tuned to attain the same accuracy, our algorithm is 4.0--7.7 times as fast as the Moses decoder with cube pruning.
 \end{abstract}
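The abstract's key idea is that hypotheses differing only in state a feature ignores (e.g. source coverage, which the language model never consults) can share one feature query. The following is a minimal, hypothetical sketch of that grouping step, not the paper's implementation: hypotheses are keyed by a "masked" state that keeps only the language-model context and drops the coverage bits.

```python
# Hypothetical illustration (not code from the patch): group decoder
# hypotheses into equivalence classes by masking out the parts of
# state the language model ignores. Hypotheses that differ only in
# source coverage then trigger a single LM query per class instead
# of one query per hypothesis.
from collections import defaultdict

def lm_key(hypothesis):
    """Equivalence-class key for the LM: keep the target-side context
    words, mask (drop) the source-coverage bitmask."""
    lm_context, _source_coverage = hypothesis
    return lm_context

def group_for_lm(hypotheses):
    classes = defaultdict(list)
    for hyp in hypotheses:
        classes[lm_key(hyp)].append(hyp)
    return classes

# Three hypotheses as (LM context, coverage bitmask) pairs; the first
# two differ only in coverage, so they fall into one equivalence class.
hyps = [
    (("the", "cat"), 0b1100),
    (("the", "cat"), 0b1010),
    (("a", "dog"),   0b1100),
]
classes = group_for_lm(hyps)
assert len(classes) == 2  # two LM queries, not three
```

This only depicts the masking idea from the first added sentence; the iterative refinement of language model scores mentioned in the revised abstract is a separate mechanism not sketched here.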