fix: Compute correct ROC-AUC score with link_prediction labels in one class #3845
Merged: colinbarry merged 5 commits into master, Mar 25, 2026
Conversation
018f3d6 to ac8ed5f (compare)
DavIvek approved these changes on Mar 9, 2026



Fixes a flaky MAGE test where, occasionally, the randomly generated graph can have all labels in a single class. ROC-AUC is undefined without both positives and negatives, so this change returns a neutral 0.5 AUC in that edge case. Also adds deterministic tests for all-positive, all-negative, and mixed-label scenarios.
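The guard described above can be sketched roughly as follows. This is an illustrative stand-in, not the actual MAGE code; the name `safe_roc_auc` is hypothetical:

```python
# Hypothetical sketch of the single-class guard; safe_roc_auc is an
# illustrative name, not the function in link_prediction_util.
import numpy as np
from sklearn.metrics import roc_auc_score


def safe_roc_auc(labels, scores):
    """ROC-AUC that falls back to a neutral 0.5 when labels contain
    only one class, since ROC-AUC is undefined without both positives
    and negatives."""
    labels = np.asarray(labels)
    if np.unique(labels).size < 2:
        return 0.5  # neutral score for the degenerate batch
    return roc_auc_score(labels, scores)
```

An all-positive batch such as `safe_roc_auc([1, 1, 1], [0.9, 0.2, 0.5])` returns 0.5 instead of raising, while a mixed batch falls through to sklearn's `roc_auc_score` as before.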
Note
Low Risk
Small, localized change to metric computation with added tests; behavior only differs in edge cases where prior code errored or produced inconsistent confusion-matrix shapes.
Overview
Fixes flaky `link_prediction_util.evaluate()` metric computation when a batch contains only one label class. `roc_auc_score` is now guarded to return `0.5` for single-class labels, and confusion-matrix values are computed lazily with `labels=[0, 1]`, only when TP/FP/TN/FN metrics are requested. Adds `pytest` coverage for AUC on all-positive, all-negative, and mixed-label cases.

Written by Cursor Bugbot for commit ac8ed5f.
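The `labels=[0, 1]` detail matters because sklearn infers the label set from the data: a single-class batch yields a 1x1 confusion matrix, so unpacking it into TN/FP/FN/TP fails. A minimal demonstration (not the actual MAGE code):

```python
# Sketch of why labels=[0, 1] is passed explicitly: without it, a batch
# containing only one class produces a 1x1 confusion matrix, and
# .ravel() no longer unpacks into four values.
from sklearn.metrics import confusion_matrix

y_true = [1, 1, 1]  # all-positive batch
y_pred = [1, 1, 1]

cm_default = confusion_matrix(y_true, y_pred)
# cm_default.shape is (1, 1) here; unpacking four values would fail.

cm_fixed = confusion_matrix(y_true, y_pred, labels=[0, 1])
tn, fp, fn, tp = cm_fixed.ravel()  # always 2x2, safe to unpack
```

With the explicit label set, the all-positive batch yields `tn = fp = fn = 0` and `tp = 3`, giving consistent confusion-matrix shapes across batches.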