
Add support for missing classes #511

Merged (12 commits, Oct 31, 2022)

Conversation

@cgnorthcutt cgnorthcutt commented Oct 28, 2022

Extends cleanlab to support missing classes

This PR adds support for the case where pred_probs.shape[1] > len(set(labels)) to most of the main methods in the count and filter modules. See examples below.
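To make the supported case concrete, here is a minimal numpy sketch of the condition this PR targets (illustrative only; cleanlab's internal handling differs):

```python
import numpy as np

# The missing-class case: the model knows more classes than appear in labels.
labels = np.array([0, 0, 2, 2, 2])   # class 1 never appears
num_classes = 3                      # pred_probs.shape[1] in cleanlab

observed = set(labels.tolist())
missing = sorted(set(range(num_classes)) - observed)

print(num_classes > len(observed))   # the condition this PR supports
print(missing)
```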

Completed

  • Missing classes should now work for both single-labeled data and multi-labeled data (list of lists) for methods in count and filter modules. I tested compute_confident_joint, find_label_issues, and get_confident_thresholds.
  • Added two tests to verify basic functionality.
  • All other tests still pass.

If you're curious, the most complicated part of this PR was getting everything to work smoothly with multi_label (including parallelization), which we did not plan for in advance. It should work now, but more testing is needed.

TODO

  • [ ] Did not check whether CleanLearning works with the new functionality; only the filter and count modules are supported so far.
  • [ ] Much more extensive testing should be added (recommended tests below).

Tests TODO (strongly recommend adding/checking these tests)

  • Check that all functions in count and filter work with missing classes, not just the three main ones I checked.
  • Check that CleanLearning works with missing classes.
  • Check that this works for all combinations of hyperparameters, particularly all filter_by methods and multi_label=True/False.
  • Check that results agree between n_jobs=1 and parallel execution (n_jobs unset), because I had to edit how the parallelization works with multi_label to get this working.
  • Replicate variations of my sanity checks below (see screenshots)
  • Double check that the classes are mapping correctly for all methods.
  • Check that cleanlab.count.estimate_py_noise_matrices_and_cv_pred_proba works if you pass in labels [0, 1, 3]
  • (General check for multi_label) Check that for a dataset of 1000 datapoints, changing the label of a single item from [0] --> [0, 1] to make it multi_label does not have a significant impact on the predicted probabilities produced. (This is how we'll know multi_label seems to work correctly.)
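Several of these tests hinge on mapping class indices like [0, 1, 3] to contiguous indices and back. A minimal round-trip sketch of that idea (illustrative only, not cleanlab's actual internals):

```python
import numpy as np

labels = np.array([0, 1, 3, 3, 0])   # class 2 is missing
observed = np.unique(labels)         # [0, 1, 3]
to_contiguous = {c: i for i, c in enumerate(observed)}

# Map to contiguous indices for internal computation, then map back.
contiguous = np.array([to_contiguous[c] for c in labels])
restored = observed[contiguous]      # original class indices again

print(np.array_equal(restored, labels))
```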

Useful (and illustrative) sanity checks

1. Removing a class by changing the fifth label from 1 --> 2

[screenshot: confident joint and flagged label errors after removing class 1]

LOOKS GOOD! Confident joint updates correctly. Label errors update correctly.
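The behavior can be sketched with a simplified single-label confident joint (NOT cleanlab's actual implementation): giving an unobserved class an infinite threshold keeps its row and column at zero.

```python
import numpy as np

def confident_joint_sketch(labels, pred_probs):
    # Simplified confident joint. Threshold for class k is the mean
    # self-confidence of examples labeled k; a class with no labeled
    # examples gets an infinite threshold, so no example ever counts
    # toward it and its row/column stay zero.
    K = pred_probs.shape[1]
    thresholds = np.array([
        pred_probs[labels == k, k].mean() if np.any(labels == k) else np.inf
        for k in range(K)
    ])
    cj = np.zeros((K, K), dtype=int)
    for y, p in zip(labels, pred_probs):
        confident = np.where(p >= thresholds)[0]
        if len(confident):
            cj[y, confident[p[confident].argmax()]] += 1
    return cj

labels = np.array([0, 0, 0, 2, 2])   # class 1 is missing
pred_probs = np.array([[0.9, 0.05, 0.05],
                       [0.8, 0.10, 0.10],
                       [0.1, 0.00, 0.90],   # labeled 0, confidently class 2
                       [0.1, 0.00, 0.90],
                       [0.2, 0.00, 0.80]])
print(confident_joint_sketch(labels, pred_probs))
```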

2. Does multi-label work with missing classes?

import numpy as np
import cleanlab

labels = [[0], [0], [0], [0, 2], [2], [2], [2], [2]]
# Third and fourth examples are the label errors
pred_probs = np.array([[0.9, 0.1, 0.0], [0.8, 0.1, 0.1], [0.1, 0.0, 0.9], [0.9, 0.0, 0.1],
                       [0.1, 0.3, 0.6], [0.1, 0.0, 0.9], [0.1, 0.0, 0.9], [0.1, 0.0, 0.9],
                      ])
cj = cleanlab.count.compute_confident_joint(labels, pred_probs, multi_label=True)
issues = cleanlab.filter.find_label_issues(labels, pred_probs, multi_label=True, filter_by='prune_by_class')
print('label issues found:', issues)
print(cj)

OUTPUT (looks good!)

label issues found: [False False  True  True False False False False]
[[3 0 1]
 [0 0 0]
 [1 0 4]]

Two logical issues to be aware of (this PR does NOT renormalize predicted probabilities)

These are not errors, but consider alternatives. Currently, if the predicted probabilities are high on a missing class, a label error will not be detected: there are no labels in that class, so cleanlab has no data to work with for it, and that probability mass is effectively lost. I left this as is and recommend merging this PR first; it handles the software mechanics of supporting missing classes, which is complicated enough. Deciding whether to renormalize will be easy once this PR is merged.

1. Example of a label error not counted because probability mass falls on a missing class

[screenshot: label error not flagged because probability mass falls on the missing class]

2. Example of label error no longer being detected as probability mass shifts to a missing class

[screenshot: label error no longer detected as probability mass shifts to the missing class]

See the fourth True/False value.
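If renormalization is later deemed desirable, one hypothetical approach (deliberately NOT what this PR does) would zero out the columns of unobserved classes and rescale each row to sum to 1:

```python
import numpy as np

def renormalize_over_observed(labels, pred_probs):
    # Hypothetical alternative, not implemented by this PR: drop probability
    # mass on classes that never appear in labels, then rescale each row.
    observed = np.isin(np.arange(pred_probs.shape[1]), np.unique(labels))
    masked = pred_probs * observed
    return masked / masked.sum(axis=1, keepdims=True)

labels = np.array([0, 2, 2])         # class 1 never appears
pred_probs = np.array([[0.2, 0.5, 0.3],
                       [0.1, 0.1, 0.8],
                       [0.3, 0.2, 0.5]])
print(renormalize_over_observed(labels, pred_probs))
```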

@cgnorthcutt (Member, Author):

Please edit directly. I don't plan to make further edits.


codecov bot commented Oct 31, 2022

Codecov Report

Merging #511 (cb949c9) into master (b12d76b) will decrease coverage by 0.28%.
The diff coverage is 94.91%.

@@            Coverage Diff             @@
##           master     #511      +/-   ##
==========================================
- Coverage   97.58%   97.30%   -0.29%     
==========================================
  Files          24       24              
  Lines        1906     1930      +24     
  Branches      381      385       +4     
==========================================
+ Hits         1860     1878      +18     
- Misses         17       21       +4     
- Partials       29       31       +2     
Impacted Files Coverage Δ
cleanlab/rank.py 97.89% <ø> (ø)
cleanlab/internal/validation.py 90.27% <50.00%> (-0.39%) ⬇️
cleanlab/internal/util.py 95.71% <93.10%> (-0.08%) ⬇️
cleanlab/classification.py 98.58% <100.00%> (ø)
cleanlab/count.py 99.45% <100.00%> (+0.01%) ⬆️
cleanlab/filter.py 96.04% <100.00%> (-2.83%) ⬇️
cleanlab/internal/latent_algebra.py 100.00% <100.00%> (ø)
cleanlab/internal/multilabel_utils.py 100.00% <100.00%> (ø)
cleanlab/internal/token_classification_utils.py 100.00% <0.00%> (ø)
... and 1 more


@jwmueller jwmueller requested review from aditya1503 and removed request for jwmueller October 31, 2022 08:24
Diff context (docstring under review):

    and multi-labeled labels. If multi_label is set to None (default)
    this method will infer if multi_label is True or False based on
    the format of labels.
    This allows for a more general form of multiclass labels that looks
Contributor:

should we extend this to all multilabel formats?
just a question, I can do this once it's merged

Member:

Only if you need it for multilabel find-label-issues to work when some of the classes never appear in the data; that is our main priority.

@anishathalye anishathalye removed their request for review October 31, 2022 18:50