
[QUESTION] Keep getting scores of '0' no matter what input used #190

Closed
Brecony76 opened this issue Dec 9, 2023 · 5 comments · Fixed by #205
Labels
question Further information is requested

Comments

@Brecony76
Brecony76 commented Dec 9, 2023

What is your question?

I keep getting scores of 0 no matter what input I give it

Code

```python
from comet import download_model, load_from_checkpoint

model_path = download_model("Unbabel/wmt22-comet-da")
model = load_from_checkpoint(model_path)

data = [
    {
        "src": "10 到 15 分钟可以送到吗",
        "mt": "Can I receive my food in 10 to 15 minutes?",
        "ref": "Can it be delivered between 10 to 15 minutes?"
    },
    {
        "src": "Pode ser entregue dentro de 10 a 15 minutos?",
        "mt": "Can you send it for 10 to 15 minutes?",
        "ref": "Can it be delivered between 10 to 15 minutes?"
    }
]

if __name__ == '__main__':
    model_output = model.predict(data, batch_size=8, gpus=1)
    print(model_output)
    print(model_output["scores"])        # sentence-level scores
    print(model_output["system_score"])  # system-level score
```

Output:

```
Prediction([('scores', [0.0, 0.0]), ('system_score', 0.0)])
[0.0, 0.0]
0.0
```
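Since the failure mode here is uniformly zero segment scores, a small sanity check on the returned scores can flag a degenerate run before the numbers are trusted downstream. This is just a sketch of mine, not part of the COMET API:

```python
def all_scores_zero(scores, tol=1e-9):
    """Heuristic check for the bug described in this issue: a COMET run
    whose segment-level scores are all numerically zero almost certainly
    failed, since real wmt22-comet-da scores fall roughly in (0, 1)."""
    return len(scores) > 0 and all(abs(s) < tol for s in scores)

# Against the outputs seen in this thread:
print(all_scores_zero([0.0, 0.0]))                                # → True (degenerate run)
print(all_scores_zero([0.8417137265205383, 0.7745385766029358]))  # → False (healthy run)
```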

What's your environment?

  • OS: Windows 10
  • Packaging: pip 23.3.1
  • Version: unbabel-comet 2.2.0
@Brecony76 Brecony76 added the question Further information is requested label Dec 9, 2023
@ricardorei
Collaborator

Hey @Brecony76. I am not able to replicate this error. I just tried it and I get the following scores:

```
Prediction([('scores', [0.8417137265205383, 0.7745385766029358]), ('system_score', 0.8081261515617371)])
```

@clang88

clang88 commented Jan 29, 2024

Hi @Brecony76 I'm observing the same issue.

  • OS: Windows 10
  • unbabel-comet 2.2.1
  • pip 23.3.1
  • Python 3.10.13
  • torch 2.1.2+cu121
  • Geforce 250MX (Driver Version: 537.79 CUDA Version: 12.2) (Yeah... it's my work laptop)

The behavior is particularly odd because sometimes it does return proper scores, with no change in code or data. I can't reliably reproduce either the 0.0 scores or the correct ones; sometimes it just works, sometimes it doesn't. I will retest tomorrow to see if I can make sense of it. For now I've completed my task of evaluating some translations with COMET (thanks to the devs and researchers for making this so intuitive!).

@BramVanroy
Contributor

I can confirm that this issue exists on Windows. It might be related to this CUDA warning:

```
[W CudaIPCTypes.cpp:16] Producer process has been terminated before all shared CUDA tensors released. See Note [Sharing CUDA tensors]
```

But I am not sure and do not have time to dig into this deeper. It is a shame though, as it unfortunately makes COMET unreliable on Windows.
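Until the underlying Windows/CUDA issue is resolved, one workaround worth trying is running prediction on CPU, which avoids the shared-CUDA-tensors path the warning above points at. A hedged sketch: `predict_on_cpu` is a name I made up, and it assumes the `predict(samples, batch_size=..., gpus=...)` signature used earlier in this thread; CPU inference is considerably slower.

```python
def predict_on_cpu(model, data, batch_size=8):
    """Workaround sketch for the zeroed-scores bug on Windows: run the
    COMET model with gpus=0 so no CUDA tensors are shared across
    processes. Slower than GPU inference."""
    return model.predict(data, batch_size=batch_size, gpus=0)
```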

@BramVanroy
Contributor

I've done some digging but haven't found a solution, although I have pinpointed the place in the PyTorch Lightning Trainer where something goes wrong: the model weights are turned to zero, but I do not know why.

To raise the priority of this, feel free to comment on the issue I raised over at PyTorch Lightning to indicate that you are also experiencing this problem: Lightning-AI/pytorch-lightning#19537

@awaelchli

I left a reply in Lightning-AI/pytorch-lightning#19537 (comment) with a suggestion. I hope it provides some useful insights.
