F1 Macro score wrong(?) when label never appears #300

Closed
ejohb opened this issue Jun 16, 2021 · 2 comments · Fixed by #303
Labels
bug / fix (Something isn't working) · help wanted (Extra attention is needed)
Milestone
v0.4

Comments


ejohb commented Jun 16, 2021

🐛 Bug

When F1 (Macro) is calculated on a batch where one (or more) labels never appear, the result is <1.0 even when all predictions are correct. This seems very counterintuitive. Is it expected?

To Reproduce

import torch
from torchmetrics import F1

# Both predictions are correct, but class 1 never appears in preds or target.
f1 = F1(num_classes=2, average='macro')(preds=torch.tensor([0, 0]), target=torch.tensor([0, 0]))
print(f1)

Output: 0.5.

This occurs if one of my batches lacks one (or more) of the labels (my real use-case has many more classes).
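For what it's worth, here is a minimal sketch of where the 0.5 appears to come from (assuming average='none' is accepted here to return per-class scores): class 0 gets F1 = 1.0, class 1 never appears and is scored 0.0, and the macro average is their unweighted mean.

import torch
from torchmetrics import F1

# Per-class F1 (assumption: average='none' yields one score per class)
per_class = F1(num_classes=2, average='none')(preds=torch.tensor([0, 0]), target=torch.tensor([0, 0]))
print(per_class)         # expected: tensor([1., 0.]) -- class 1 has no support and is scored 0
print(per_class.mean())  # expected: tensor(0.5000) -- the reported macro value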

Expected behavior

I'm not an expert on how F1 is calculated, but I would expect the output to be 1.0.

Environment

pytorch-lightning==1.3.1
torch==1.8.1+cu111
torchmetrics==0.2.0
torchvision==0.9.1+cu111
ejohb added the bug / fix and help wanted labels on Jun 16, 2021
@github-actions

Hi! Thanks for your contribution, great first issue!

@SkafteNicki
Member

Seems related to #295.

Borda added this to the v0.4 milestone on Jul 2, 2021