
FEA Add support for micro averaging in ovr-roc-auc #24338

Merged
merged 20 commits into from Sep 14, 2022

Conversation


@ArturoAmorQ ArturoAmorQ commented Sep 2, 2022

Reference Issues/PRs

Relevant for simplifying #24200.

What does this implement/fix? Explain your changes.

Micro-averaging was not supported for the multiclass roc_auc_score. There is no reason not to support it, since the label_binarize function is already run internally.

This PR makes the micro-averaged roc_auc_score accessible to users without requiring an external call to label_binarize.

Any other comments?

I added a test that covers this line of code and, as a side effect, checks for consistency with the chance level.

@ogrisel ogrisel left a comment


Thanks for the PR. Here is some feedback. I think the example should be updated accordingly in a follow-up PR once this one has been reviewed and merged.

ArturoAmorQ and others added 3 commits September 5, 2022 16:14
@ogrisel ogrisel left a comment


Thanks for the new tests. They are very clean and convincing.

Just a few suggestions to clarify the intent.

@ogrisel ogrisel added the Quick Review (for PRs that are quick to review) and New Feature labels Sep 9, 2022
ArturoAmorQ and others added 2 commits September 9, 2022 11:49
Co-authored-by: Olivier Grisel <olivier.grisel@ensta.org>
@jjerphan jjerphan left a comment


LGTM. Thank you, @ArturoAmorQ.

Here are a few minor comments.

ArturoAmorQ and others added 2 commits September 13, 2022 11:25
Co-authored-by: Julien Jerphanion <git@jjerphan.xyz>
Co-authored-by: Julien Jerphanion <git@jjerphan.xyz>
@ArturoAmorQ
Member Author

Thanks for your time and feedback @ogrisel, @jjerphan and @glemaitre. All your comments have been addressed.

@jjerphan jjerphan merged commit 189d2f3 into scikit-learn:main Sep 14, 2022
@jjerphan jjerphan deleted the ovr_micro branch September 14, 2022 08:53
@jjerphan
Member

Thank you, @ArturoAmorQ!

Labels: module:metrics, New Feature, Quick Review (for PRs that are quick to review)

4 participants