[QUESTION] Interpretation and comparison of COMET scores across several languages #110
Comments
Hi @clairehua1, You should avoid comparing scores across languages and even across domains. This is true not just for COMET but for any MT metric. For example, BLEU, even though it is lexical, depends heavily on the underlying tokenizer, so results vary a lot between languages. PS: even human annotation has a lot of variability between languages and domains. If we want reliable and comparable results, we need to make sure the test conditions are the same (same data, same annotators). Cheers,
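Ricardo's point about tokenizer dependence can be illustrated with a toy sketch. This is not real BLEU, just unigram precision, and the sentences and the two tokenizers are invented for illustration; the same hypothesis/reference pair gets a different score depending on how punctuation is tokenized:

```python
import re
from collections import Counter

def ngram_precision(hyp_tokens, ref_tokens, n=1):
    """Clipped n-gram precision of the hypothesis against the reference."""
    hyp = Counter(tuple(hyp_tokens[i:i + n]) for i in range(len(hyp_tokens) - n + 1))
    ref = Counter(tuple(ref_tokens[i:i + n]) for i in range(len(ref_tokens) - n + 1))
    overlap = sum(min(count, ref[gram]) for gram, count in hyp.items())
    return overlap / max(sum(hyp.values()), 1)

hyp = "Das ist ein Test."
ref = "Das ist der Test."

# Tokenizer A: whitespace only ("Test." stays one token)
tok_ws = lambda s: s.split()
# Tokenizer B: split punctuation off ("Test" and "." become separate tokens)
tok_punct = lambda s: re.findall(r"\w+|[^\w\s]", s)

p_ws = ngram_precision(tok_ws(hyp), tok_ws(ref))        # 3/4 = 0.75
p_punct = ngram_precision(tok_punct(hyp), tok_punct(ref))  # 4/5 = 0.80
```

The underlying translation quality is identical in both cases; only the tokenization changed the score, which is why lexical metrics are hard to compare across languages with different tokenization conventions.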
Thanks for the answer, Ricardo! Is there a way to interpret the COMET score other than using it as a ranking system?
@clairehua1 For a specific setting (language pair and domain) you could plot the distribution of scores and analyse it by looking at quantiles. The scores usually follow a normal distribution. To give a bit more context: most models are trained to predict a z-normalized direct assessment (a z-score). Z-scores have a mean of 0 and follow a normal distribution, which means that ideally a score of 0 should represent an average translation. In practice the distribution of scores (for the default models
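The quantile-based interpretation above can be sketched as follows. The scores here are simulated as a normal distribution centred at 0 (the idealised z-normalized case described in the comment); real COMET scores for a given language pair and domain would replace the simulated list:

```python
import random

random.seed(0)
# Hypothetical: COMET scores for one language pair / domain, simulated
# here as roughly z-normalized (mean 0). Replace with real scores.
scores = sorted(random.gauss(0.0, 0.5) for _ in range(10_000))

def quantile(q):
    """Empirical quantile of the sorted score list."""
    return scores[int(q * (len(scores) - 1))]

median = quantile(0.5)          # near 0: an "average" translation for this setting
p25, p75 = quantile(0.25), quantile(0.75)
# A new translation can then be placed within this distribution:
# below p25 -> worse than ~75% of translations in this setting, etc.
```

The key design point is that the quantiles are only meaningful within one setting; the same raw score placed against a different language pair's distribution lands at a different quantile.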
In the plots above you can see how different the scores are between English-German and English-Hausa. You can also see that the "peak" for German is a bit higher than for Hausa. This is expected, since German translations tend to have better quality than Hausa ones.
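To make concrete why such shifted distributions rule out cross-language comparison, here is a sketch with two invented distributions (the means and standard deviations are made up for illustration, not measured en-de/en-ha values). The same raw score lands at very different percentiles in each:

```python
import bisect
import random

random.seed(1)
# Hypothetical score distributions for two language pairs;
# the parameters below are invented for illustration only.
en_de = sorted(random.gauss(0.4, 0.3) for _ in range(10_000))
en_ha = sorted(random.gauss(-0.2, 0.5) for _ in range(10_000))

def percentile_of(score, sorted_dist):
    """Fraction of the distribution strictly below the given score."""
    return bisect.bisect_left(sorted_dist, score) / len(sorted_dist)

# The same raw score 0.2 is below average for en-de but well above
# average for en-ha, so comparing the raw numbers is misleading.
p_de = percentile_of(0.2, en_de)  # roughly the 25th percentile
p_ha = percentile_of(0.2, en_ha)  # roughly the 80th percentile
```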
❓ Questions and Help
Before asking:
What is your question?
Code
What have you tried?
What's your environment?