
Add Expected Calibration Error #218

Closed
edwardclem opened this issue Apr 30, 2021 · 6 comments · Fixed by #394
Labels: enhancement (New feature or request) · help wanted (Extra attention is needed) · New metric

Comments

@edwardclem
Contributor

🚀 Feature

A new metric computing the Expected Calibration Error (ECE) from Naeini et al. 2015. Useful for determining whether a classifier's softmax probability scores are well-calibrated and represent reasonable probabilities.
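
For reference, a minimal sketch of the binned ECE estimator: bucket predictions into equal-width confidence bins and take the weighted average of |accuracy − confidence| over the bins. The function name, the 15-bin default, and the tensor shapes are illustrative assumptions rather than a settled API; it roughly follows the approach of the linked nn.Module implementation.

```python
import torch

def expected_calibration_error(probs: torch.Tensor, targets: torch.Tensor, n_bins: int = 15) -> torch.Tensor:
    # probs: (N, C) softmax probabilities, targets: (N,) integer class labels.
    confidences, predictions = probs.max(dim=1)        # top-1 probability and predicted class
    accuracies = predictions.eq(targets).float()
    bin_edges = torch.linspace(0, 1, n_bins + 1)

    ece = torch.zeros(1)
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        prop_in_bin = in_bin.float().mean()            # |B_b| / N
        if prop_in_bin > 0:
            acc_in_bin = accuracies[in_bin].mean()
            conf_in_bin = confidences[in_bin].mean()
            ece += prop_in_bin * (acc_in_bin - conf_in_bin).abs()
    return ece
```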

Motivation

I ran into this metric after seeing Guo et al. 2017, which discusses how very large and deep networks are vulnerable to calibration issues (i.e. are systematically over- or under-confident) and suggests temperature scaling as a method for producing reasonably calibrated softmax outputs using a validation dataset. I've implemented a simple version of this metric using the old PyTorch Lightning Metric API, mostly based on @gpleiss's PyTorch nn.Module implementation of the metric here.
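
For context, a rough sketch of the temperature-scaling idea from Guo et al. 2017: a single scalar temperature T is fitted by minimizing NLL on held-out validation logits, and logits are divided by T before the softmax. The class and method names below are just for illustration and are not part of the proposed metric.

```python
import torch
import torch.nn as nn

class TemperatureScaler(nn.Module):
    """Rescales logits by a learned scalar temperature (logits / T)."""

    def __init__(self):
        super().__init__()
        self.temperature = nn.Parameter(torch.ones(1))

    def forward(self, logits: torch.Tensor) -> torch.Tensor:
        return logits / self.temperature

    def fit(self, val_logits: torch.Tensor, val_targets: torch.Tensor) -> None:
        # Minimize NLL on the validation set with LBFGS, as suggested in Guo et al. 2017.
        nll = nn.CrossEntropyLoss()
        optimizer = torch.optim.LBFGS([self.temperature], lr=0.01, max_iter=50)

        def closure():
            optimizer.zero_grad()
            loss = nll(self.forward(val_logits), val_targets)
            loss.backward()
            return loss

        optimizer.step(closure)
```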

Additional context

There was an interesting preprint discussing alternatives to ECE that might also be worth integrating. The main criticism is that ECE can be reductive in multi-class settings because it does not consider all the probabilities produced by the model, only the probability of the predicted class (i.e. the highest predicted probability). I'd have to think more about this, but it could also be worth adding.
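
To make that criticism concrete, here is one possible classwise (one-vs-rest) variant that bins every class probability instead of only the top-1 confidence. This is just an illustration of the general idea, not necessarily the estimator proposed in the preprint, and the function name is made up for the sketch.

```python
import torch

def classwise_calibration_error(probs: torch.Tensor, targets: torch.Tensor, n_bins: int = 15) -> torch.Tensor:
    # Treat each class as a one-vs-rest binary problem, bin its predicted
    # probability, and average the resulting per-class calibration errors.
    n_classes = probs.shape[1]
    bin_edges = torch.linspace(0, 1, n_bins + 1)
    per_class_errors = []
    for c in range(n_classes):
        conf_c = probs[:, c]
        hit_c = targets.eq(c).float()
        err = torch.zeros(1)
        for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
            in_bin = (conf_c > lo) & (conf_c <= hi)
            prop_in_bin = in_bin.float().mean()
            if prop_in_bin > 0:
                err += prop_in_bin * (hit_c[in_bin].mean() - conf_c[in_bin].mean()).abs()
        per_class_errors.append(err)
    return torch.stack(per_class_errors).mean()
```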

@edwardclem added the enhancement (New feature or request) and help wanted (Extra attention is needed) labels on Apr 30, 2021
@github-actions

Hi! Thanks for your contribution, great first issue!

@SkafteNicki
Member

@edwardclem sounds like a great addition! want to send a PR?

@edwardclem
Contributor Author

Sure! Let me write up some test cases first. Is there any documentation I should look at about this library's test practices, or should I just look at some of the other classification metric tests?

@SkafteNicki
Member

@edwardclem how is it going here?

@edwardclem
Contributor Author

I have the calibration error metric working and tested with the L1, L2, and max norms, matching the behavior of the sklearn PR here. There appears to be an implementation difference in the bias correction term from this paper that causes those tests to fail, and I'm checking with the sklearn devs to see which one is correct. I've confirmed that if I change my torch code to exactly match the numpy code the behavior is the same, but I think there might be a small mistake in the sklearn implementation. Either way, I'm happy to open a PR so the review can start. If we want to move ahead with this feature, I can just remove the bias correction term and commit the rest. What do you think?
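
For reference, a minimal sketch of how the different norms could aggregate the per-bin gaps, assuming the bin proportions and |accuracy − confidence| gaps are computed as in the ECE sketch earlier in the thread. The bias correction term under discussion is deliberately left out, and the function and argument names are illustrative rather than the PR's actual API.

```python
import torch

def aggregate_bins(props: torch.Tensor, gaps: torch.Tensor, norm: str = "l1") -> torch.Tensor:
    # props: per-bin proportions |B_b| / N; gaps: per-bin |accuracy - confidence|.
    if norm == "l1":   # expected calibration error
        return (props * gaps).sum()
    if norm == "l2":   # RMS calibration error
        return (props * gaps.pow(2)).sum().sqrt()
    if norm == "max":  # maximum calibration error
        return gaps.max()
    raise ValueError(f"Unknown norm: {norm}")
```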

@SkafteNicki
Member

Hi @edwardclem, sounds like a great plan to me (sorry for not getting back to you sooner).
Please feel free to open a PR :]

@SkafteNicki linked a pull request on Jul 23, 2021 that will close this issue
@Borda changed the title from [New Metric] Expected Calibration Error to Add Expected Calibration Error on Jan 26, 2022