
Allow unnormalized class scores for Accuracy #60

Closed
its-dron opened this issue Feb 22, 2021 · 3 comments · Fixed by #200
Labels
help wanted Extra attention is needed

Comments

@its-dron

🚀 Feature

Presently, when using the Accuracy metric on multi-class scores (the (N, C) entry in the input types), the scores are required to be probabilities in [0, 1].

However, un-thresholded accuracy can be computed without normalized probabilities as inputs, since the relative ordering of the scores is all that is needed.
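To illustrate the ordering argument, a minimal sketch in plain PyTorch showing that softmax is order-preserving per row, so top-1 predictions are unchanged:

```python
import torch

logits = torch.tensor([[2.5, -1.0, 0.3],
                       [-0.2, 4.1, 1.7]])    # arbitrary, unnormalized class scores (N, C)
probs = torch.softmax(logits, dim=1)         # monotonic, order-preserving map per row

# top-1 predictions are identical for both representations,
# so un-thresholded accuracy does not depend on normalization
assert torch.equal(logits.argmax(dim=1), probs.argmax(dim=1))
```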

Given that some uses of Accuracy do require normalized probabilities, we could implement this as a flag that would disable the input check.

Motivation

It is common to work with unnormalized class scores during training, especially in classification tasks, since they are what the numerically more stable nn.CrossEntropyLoss consumes. Rather than having to compute an extra softmax just for the accuracy metric, it would be reasonable to allow arbitrarily scaled inputs.

I specify Accuracy because it is the use case I ran into, but other metrics may well have the same property.

Pitch

Add a flag to Accuracy (and any other applicable metrics) that disables the input range check for preds.
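A rough sketch of what this could look like at the call site. The flag name (`is_logits`) and import path are purely illustrative assumptions, not an existing argument or a committed API:

```python
from pytorch_lightning.metrics import Accuracy  # location as of this issue

# Hypothetical flag: would skip the [0, 1] range check on preds
# and treat the (N, C) inputs as raw, unnormalized scores.
acc = Accuracy(is_logits=True)
```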

Alternatives

The present workaround is to apply a softmax to the scores before feeding them to the Accuracy metric, as in the sketch below.
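A minimal sketch of that workaround, assuming a metrics version contemporary with this issue where `Accuracy()` accepts (N, C) float probabilities directly (the import path and constructor arguments may differ in other releases):

```python
import torch
from torchmetrics import Accuracy  # `pytorch_lightning.metrics.Accuracy` in older releases

accuracy = Accuracy()

logits = torch.randn(8, 5)             # unnormalized class scores, shape (N, C)
target = torch.randint(0, 5, (8,))     # integer class labels, shape (N,)

probs = torch.softmax(logits, dim=1)   # normalize so preds pass the [0, 1] check
accuracy.update(probs, target)
print(accuracy.compute())
```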

Additional context

https://github.com/PyTorchLightning/pytorch-lightning/blob/0456b4598f5f7eaebf626bca45d563562a15887b/pytorch_lightning/metrics/functional/accuracy.py#L25

@Jumperkables

This please

@Borda Borda transferred this issue from Lightning-AI/pytorch-lightning Mar 12, 2021
@github-actions

Hi! Thanks for your contribution, great first issue!

@jspaezp
Contributor

jspaezp commented Apr 1, 2021

Just my opinion, but if this is implemented it should come with a warning the first time it happens, since sometimes you (I definitely do) would still want the metric calculated with the current behavior.
