
Unrealistic perplexity #17

Open

GoogleCodeExporter opened this issue Jul 16, 2015 · 3 comments

I'm trying to evaluate a 5-gram model on a Vietnamese corpus, but the perplexity
doesn't seem right.


What steps will reproduce the problem?
1. Download and extract problem.zip
2. Follow the README file


What is the expected output? What do you see instead?

The results from BerkeleyLM and SRILM should be comparable, but in fact
BerkeleyLM returns an unrealistic perplexity of around 1.


What version of the product are you using? On what operating system?

1.1.5 on Ubuntu.

Please provide any additional information below.

Original issue reported on code.google.com by ngocminh...@gmail.com on 12 Feb 2014 at 3:27

Attachments: problem.zip

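For context on why "around 1" is unrealistic: perplexity is 10^(-logprob/N), where logprob is the total base-10 log probability of the text and N is the token count, so a perplexity near 1 requires a total log probability near zero. A quick shell check of the formula, with hypothetical totals:

$ awk 'BEGIN { logprob = -100; n = 50;               # hypothetical totals
               print exp(-log(10) * logprob / n) }'  # ppl = 10^(-logprob/n) = 100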

Sorry for taking so long to get back. My first guess is that it has something
to do with the score for unseen words. Can you verify that scoring the data
you generated the LM from (so that there are no unknown words) with both
SRILM and BerkeleyLM gives similar results? Otherwise, it might be some ugly
character-encoding issue.

Original comment by adpa...@gmail.com on 18 Feb 2014 at 12:09


It is better, but still an order of magnitude smaller (in absolute value)
than SRILM's. My corpus is encoded in UTF-8; Vietnamese text makes heavy use
of accented characters, which cannot be represented in ASCII.


$ . ./env.sh

$ java -ea -mx1000m -server -cp berkeleylm.jar \
    edu.berkeley.nlp.lm.io.MakeKneserNeyArpaFromText 5 segmented.arpa \
    $SEGMENTED_CORPUS_TRAIN
$ java -ea -mx1000m -server -cp berkeleylm.jar \
    edu.berkeley.nlp.lm.io.MakeLmBinaryFromArpa segmented.arpa segmented.binary
$ java -ea -mx1000m -server -cp berkeleylm.jar \
    edu.berkeley.nlp.lm.io.ComputeLogProbabilityOfTextStream segmented.binary \
    $SEGMENTED_CORPUS_TRAIN
Log probability of text is: -67358.47160708543

$ ngram-count -ukndiscount -order 5 -lm segmented.srilm.arpa -text \
    $SEGMENTED_CORPUS_TRAIN
$ ngram -lm segmented.srilm.arpa -ppl $SEGMENTED_CORPUS_TRAIN
file segmented.train.txt: 68197 sentences, 1.54738e+06 words, 0 OOVs
0 zeroprobs, logprob= -3.16751e+06 ppl= 91.3297 ppl1= 111.435

Original comment by ngocminh...@gmail.com on 18 Feb 2014 at 12:26
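Converting both totals to per-token perplexity makes the gap concrete. Assuming both tools report base-10 log probabilities and counting one end-of-sentence token per sentence, N = 1,547,380 + 68,197 = 1,615,577:

$ awk 'BEGIN { n = 1547380 + 68197;                        # words + one </s> per sentence
               print exp(log(10) * 67358.47160708543 / n); # BerkeleyLM total -> ppl ~ 1.10
               print exp(log(10) * 3.16751e6 / n) }'       # SRILM total -> ppl ~ 91.33

The SRILM conversion reproduces its reported ppl of 91.3297, which supports the token-count assumption; under the same assumption, BerkeleyLM's total corresponds to a perplexity of about 1.1, consistent with the "around 1" in the original report.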


Interesting. Full disclosure: I don't have time to do real debugging anymore
myself, so I think you're largely on your own. SRILM by default does different
things with modified KN smoothing and the computation of discount factors. At
one point, I made sure they did exactly the same thing for some simplified
settings of SRILM, but I couldn't tell you what those settings are.

If I were you, I would check very short sentences with very common words (a
minimal version is sketched after this comment). Most of the difference
between SRILM and BerkeleyLM happens for low-count words, so the difference
should shrink if that's all that's going on.

Original comment by adpa...@gmail.com on 18 Feb 2014 at 12:42
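A minimal version of this check, reusing the commands from the transcript above (the test sentence is a placeholder; substitute a short sentence of frequent words from the training data):

$ printf '%s\n' 'PLACEHOLDER SHORT SENTENCE' > short.txt
$ java -ea -mx1000m -server -cp berkeleylm.jar \
    edu.berkeley.nlp.lm.io.ComputeLogProbabilityOfTextStream segmented.binary short.txt
$ ngram -lm segmented.srilm.arpa -ppl short.txt

If the two scores agree here but diverge on the full corpus, the discrepancy is concentrated in low-count n-grams, as suggested.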
