To Trust Or Not To Trust A Classifier. A measure of uncertainty for any trained (possibly black-box) classifier that is more effective than the classifier's own implied confidence (e.g., the softmax probability of a neural network).
CONTRIBUTING.md
LICENSE
README.rst
TrustScore.ipynb
trustscore.py
trustscore_evaluation.py

To Trust Or Not To Trust A Classifier

This is not an officially supported Google product.

A signal of model confidence for a trained classifier, computed from labeled training examples and the classifier's hard predictions on those examples.

See https://arxiv.org/abs/1805.11783
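
The core idea from the paper can be sketched as follows: the trust score of a test point is the ratio of its distance to the nearest class *other than* the predicted class over its distance to the predicted class, measured against the labeled training data. The sketch below is an illustrative simplification (the function name and `k` parameter are ours, and it omits the paper's high-density filtering step), not the API of the repository's ``trustscore.py``:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def trust_score(X_train, y_train, x, predicted_label, k=5):
    """Simplified trust score: distance to the nearest non-predicted
    class divided by distance to the predicted class.

    Scores well above 1 suggest the prediction agrees with the
    nearest-neighbor structure of the training data; scores below 1
    suggest it should be distrusted.
    """
    classes = np.unique(y_train)
    # Distance from x to the k-th nearest training point of each class.
    dist_to_class = {}
    for c in classes:
        nn = NearestNeighbors(n_neighbors=k).fit(X_train[y_train == c])
        distances, _ = nn.kneighbors([x])
        dist_to_class[c] = distances[0, -1]
    d_pred = dist_to_class[predicted_label]
    d_other = min(d for c, d in dist_to_class.items() if c != predicted_label)
    return d_other / d_pred
```

For example, a point sitting inside a tight cluster of class-0 training examples receives a high trust score when the classifier predicts class 0, and a low one when it predicts any other class.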