
Inconsistent default average value in F1Score (and others) compared to MulticlassF1Score #2047

Open
OmerShubi opened this issue Sep 3, 2023 · 4 comments
Comments

@OmerShubi
Contributor

🐛 Bug

The default value for the average field in F1Score is micro:

average: Optional[Literal["micro", "macro", "weighted", "none"]] = "micro",

However, the default in MulticlassF1Score is macro:

average: Optional[Literal["micro", "macro", "weighted", "none"]] = "macro",

When using the F1Score function, it seems as though micro will be used by default, which is misleading.
In practice, when using F1Score with task='multiclass', the default value is macro.

*The same inconsistency exists in Precision and Recall (and possibly others).

To Reproduce

See the F1Score / MulticlassF1Score function calls.
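
As a minimal check (a sketch added for illustration, not part of the original report), one can construct the metric both through the task wrapper and through the class-specific metric and compare the averaging mode each instance ends up with; reading the average attribute assumes the classes store the configured mode under that name internally:

import torch
from torchmetrics import F1Score
from torchmetrics.classification import MulticlassF1Score

# Build the metric through the task wrapper (signature default: "micro")
# and through the class-specific metric (signature default: "macro").
wrapped = F1Score(task="multiclass", num_classes=3)
explicit = MulticlassF1Score(num_classes=3)

# Compare which averaging mode each instance actually received.
print(type(wrapped).__name__, wrapped.average)
print(type(explicit).__name__, explicit.average)

# On imbalanced data, micro and macro averaging give different scores,
# so any mismatch in defaults also shows up in the returned values.
preds = torch.tensor([0, 1, 2, 0])
target = torch.tensor([0, 1, 1, 1])
print(wrapped(preds, target))
print(explicit(preds, target))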

Expected behavior

Maybe delete the default value in the F1Score wrapper, as it is overridden anyway?

Environment

  • TorchMetrics version (and how you installed TM, e.g. conda, pip, build from source): 0.11.4, conda
  • Python & PyTorch Version (e.g., 1.0): 3.11.3, 2.0.1
  • Any other relevant information such as OS (e.g., Linux): Linux
@OmerShubi OmerShubi added bug / fix Something isn't working help wanted Extra attention is needed labels Sep 3, 2023
@github-actions

github-actions bot commented Sep 3, 2023

Hi! Thanks for your contribution, great first issue!

@OmerShubi
Contributor Author

I was using an old version of torchmetrics.
In 1.1.1, the function parameters of F1Score (and others) are not shown in the editor (VS Code), but the misleading default value remains.

@Borda Borda added the v0.11.x label Sep 6, 2023
@Borda
Member

Borda commented Sep 6, 2023

but the misleading default value remains.

@SkafteNicki mind checking it, please?

@SkafteNicki
Member

@Borda unsure what to do about this. Essentially, we cannot fix it without breaking some kind of backwards compatibility.
The reason the base classes F1Score, Precision, Recall, etc. have average set to micro is that this was the default value before the classification refactor. After the classification refactor and the introduction of the class-specific metrics MulticlassF1Score, MulticlassPrecision, and MulticlassRecall, we set the default to macro. So the reason for this difference is that we tried to keep backward compatibility with the metrics as they were before the refactor.

@Borda Borda added waiting on author and removed help wanted Extra attention is needed labels Aug 29, 2024