Changing BLEU tokenizer behavior and making the default tokenizer 13a #20
Conversation
Thanks for adding this! This looks good to me; I just left two remarks.
PS: for the code quality check to pass you might need to run `make quality && make style`.
Amazing, thanks!
```diff
 ... ]
 >>> bleu = evaluate.load_metric("bleu")
 >>> results = bleu.compute(predictions=predictions, references=references)
 >>> print(results)
-{'bleu': 0.6370964381207871, 'precisions': [0.8333333333333334, 0.75, 1.0, 1.0], 'brevity_penalty': 0.7165313105737893, 'length_ratio': 0.75, 'translation_length': 6, 'reference_length': 8}
+{'bleu': 1.0, 'precisions': [1.0, 1.0, 1.0, 1.0], 'brevity_penalty': 1.0, 'length_ratio': 1.1666666666666667, 'translation_length': 7, 'reference_length': 6}
```
Any idea why the results are not the same?
I added a second reference compared to the previous example (to illustrate the functionality). I triple-checked though, and the results are the same for the previous version and the current version of the code! 🤗
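To see why adding a reference changes the score, here is a minimal, illustrative sketch of multi-reference corpus BLEU (my own toy implementation, not the code in this PR): the effective reference length is taken from the reference closest in length to the prediction, and each n-gram count is clipped by its maximum count across *all* references, so an extra reference can change both the brevity penalty and the precisions.

```python
import math
from collections import Counter

def ngram_counts(tokens, n):
    """Count all n-grams of order n in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(predictions, references, max_n=4):
    """Toy corpus-level BLEU over pre-tokenized inputs.

    predictions: list of token lists
    references:  list of lists of token lists (several references per prediction)
    """
    pred_len = ref_len = 0
    clipped = [0] * max_n
    totals = [0] * max_n
    for pred, refs in zip(predictions, references):
        pred_len += len(pred)
        # Effective reference length: the reference closest in length to the
        # prediction (ties broken toward the shorter one), so adding a
        # reference can change the brevity penalty.
        ref_len += min((abs(len(r) - len(pred)), len(r)) for r in refs)[1]
        for n in range(1, max_n + 1):
            pred_counts = ngram_counts(pred, n)
            # Clip each n-gram count by its maximum count in any single
            # reference, so adding a reference can raise the precisions too.
            max_ref = Counter()
            for r in refs:
                for gram, count in ngram_counts(r, n).items():
                    max_ref[gram] = max(max_ref[gram], count)
            clipped[n - 1] += sum(min(c, max_ref[g]) for g, c in pred_counts.items())
            totals[n - 1] += sum(pred_counts.values())
    precisions = [c / t if t else 0.0 for c, t in zip(clipped, totals)]
    if min(precisions) == 0.0:
        return 0.0
    bp = 1.0 if pred_len > ref_len else math.exp(1 - ref_len / pred_len)
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

pred = ["foo", "bar", "baz", "qux"]
one_ref = [[["foo", "bar", "qux", "baz"]]]
two_refs = [[["foo", "bar", "qux", "baz"], ["foo", "bar", "baz", "qux"]]]
print(corpus_bleu([pred], one_ref))   # 0.0 (no 3-gram overlap with the single reference)
print(corpus_bleu([pred], two_refs))  # 1.0 (second reference matches the prediction exactly)
```

The same prediction can thus jump from a zero score to a perfect score purely by adding one more reference, which is exactly the effect visible in the two result dicts above.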
Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>
As previously discussed in an issue from 2020, inconsistent tokenization in BLEU can cause reproducibility issues. The proposed solution was to use the `13a` tokenizer, which is the default for SacreBLEU and WMT. After discussion with @lhoestq, I made this the default behavior of the BLEU implementation, while still making it possible to use other tokenizers such as NLTK's `word_tokenize`. I also updated the README to reflect these changes and to further discuss the impact that tokenization can have on the reproducibility of BLEU scores.
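For context, the `13a` scheme is essentially a handful of regex rules applied before whitespace splitting. Below is a rough sketch (a hypothetical `tokenize_13a` helper of my own, loosely following the regexes SacreBLEU uses for the language-independent part of mteval-v13a; the real tokenizer also normalizes some XML entities and whitespace) showing how it differs from naive `str.split()`:

```python
import re

def tokenize_13a(line: str) -> list:
    """Illustrative sketch of mteval-v13a-style tokenization (not the PR's code)."""
    line = line.strip()
    # pad most ASCII punctuation with spaces
    line = re.sub(r"([\{-\~\[-\` -\&\(-\+\:-\@\/])", r" \1 ", line)
    # split off periods/commas unless they sit between digits (e.g. "3.14")
    line = re.sub(r"([^0-9])([\.,])", r"\1 \2 ", line)
    line = re.sub(r"([\.,])([^0-9])", r" \1 \2", line)
    # split a dash that follows a digit (e.g. "1990-2000")
    line = re.sub(r"([0-9])(-)", r"\1 \2 ", line)
    return line.split()

print(tokenize_13a("Hello, world!"))  # ['Hello', ',', 'world', '!']
print("Hello, world!".split())        # ['Hello,', 'world!']
```

Because `13a` splits punctuation into separate tokens while a plain whitespace split keeps `world!` as one token, the two schemes produce different n-grams and therefore different BLEU scores for the same text, which is why fixing one default tokenizer matters for reproducibility.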
Please let me know what you think, @lvwerra and @lhoestq !
cc @thomwolf because you were involved in the original discussion in the issue linked above.