[QUESTION] Keep getting scores of '0' no matter what input used #190
Comments
Hey @Brecony76. I am not able to replicate this error. I just tried it and I get the following scores: `Prediction([('scores', [0.8417137265205383, 0.7745385766029358]), ('system_score', 0.8081261515617371)])`
Hi @Brecony76, I'm observing the same issue.
The behavior is particularly odd because it sometimes does return a score, with no change in code or data... I'm not sure how to reproduce either the 0.0 scores or the proper scores; sometimes it just works, sometimes it doesn't. I will retest tomorrow to see if I can make sense of it. For now, I have completed my task of evaluating some translations with COMET (thanks to the devs and researchers for making this so intuitive!)
I can confirm that this issue exists on Windows. It might be related to this CUDA warning:
But I am not sure and do not have time to dig deeper into this. It is a shame, though, as this unfortunately makes COMET unreliable on Windows.
I've done some digging but haven't found a solution, although I have pinpointed the place in the PL Trainer where things go wrong: the model weights are turned to zero, but I do not know why. To raise the priority of this, feel free to comment on the issue I raised over at PyTorch Lightning and indicate that you are also experiencing the problem: Lightning-AI/pytorch-lightning#19537
I left a reply in Lightning-AI/pytorch-lightning#19537 (comment) with a suggestion. I hope it provides some useful insights.
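The all-zero weights mentioned above can be sanity-checked right after loading a checkpoint. Below is a minimal, framework-agnostic sketch: `all_params_zero` and the nested-list parameter layout are illustrative stand-ins for iterating over something like `model.named_parameters()`, not part of the COMET API.

```python
def all_params_zero(named_params):
    """Return True if every parameter consists solely of zeros.

    `named_params` is an iterable of (name, values) pairs, where `values`
    is a (possibly nested) sequence of floats -- a stand-in for the
    (name, tensor) pairs a real model would yield.
    """
    def flatten(values):
        # Walk nested lists/tuples and yield the scalar leaves.
        for v in values:
            if isinstance(v, (list, tuple)):
                yield from flatten(v)
            else:
                yield v

    saw_any = False
    for _, values in named_params:
        for v in flatten(values):
            saw_any = True
            if v != 0.0:
                return False  # found a non-zero weight: model looks healthy
    return saw_any  # True only if we saw weights and all were zero
```

A healthy checkpoint should report `False` here; a `True` result would match the symptom described in the Lightning issue.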
What is your question?
I keep getting scores of 0 no matter what input I give it.
Code
```python
from comet import download_model, load_from_checkpoint

model_path = download_model("Unbabel/wmt22-comet-da")
model = load_from_checkpoint(model_path)

data = [
    {
        "src": "10 到 15 分钟可以送到吗",
        "mt": "Can I receive my food in 10 to 15 minutes?",
        "ref": "Can it be delivered between 10 to 15 minutes?"
    },
    {
        "src": "Pode ser entregue dentro de 10 a 15 minutos?",
        "mt": "Can you send it for 10 to 15 minutes?",
        "ref": "Can it be delivered between 10 to 15 minutes?"
    }
]

if __name__ == '__main__':
    model_output = model.predict(data, batch_size=8, gpus=1)
    print(model_output)
    print(model_output["scores"])        # sentence-level scores
    print(model_output["system_score"])  # system-level score
```
Output:

```
Prediction([('scores', [0.0, 0.0]), ('system_score', 0.0)])
[0.0, 0.0]
0.0
```
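Since the thread reports the failure is intermittent ("sometimes it just works"), one hedged mitigation while the root cause is open is to retry the call and treat an all-zero result as a failed run. In this sketch, `predict_fn` is a placeholder callable for something like `model.predict`; the retry wrapper itself is not part of COMET.

```python
def predict_with_retry(predict_fn, data, max_attempts=3):
    """Call `predict_fn(data)` up to `max_attempts` times and return the
    first result whose scores are not all zero.

    Assumes the result is a mapping with a "scores" list, matching the
    Prediction output shown above. If every attempt comes back all-zero,
    the last result is returned so the caller can still inspect it.
    """
    result = None
    for _ in range(max_attempts):
        result = predict_fn(data)
        if any(score != 0.0 for score in result["scores"]):
            return result  # got plausible scores, stop retrying
    return result


# Usage with a stub that fails once, then succeeds:
attempts = iter([{"scores": [0.0, 0.0]}, {"scores": [0.84, 0.77]}])
result = predict_with_retry(lambda data: next(attempts), data=[])
print(result["scores"])  # [0.84, 0.77]
```

This does not fix the underlying Trainer bug, but it can distinguish a transient failure from a consistently broken environment.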
What's your environment?