
Added pos_label parameter to roc_auc_score function #2616

Closed

Conversation


To be able to run roc_auc_score on binary targets that aren't {0, 1} or {-1, 1}.
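For context, `roc_curve` already accepts a `pos_label` argument, so the behaviour this PR asks for can be sketched today by going through `roc_curve` and `auc` directly (a workaround illustrating the motivation, not this PR's implementation):

```python
import numpy as np
from sklearn.metrics import auc, roc_curve

# Binary targets that are neither {0, 1} nor {-1, 1}.
y_true = np.array([2, 2, 3, 3])
y_score = np.array([0.1, 0.4, 0.35, 0.8])

# roc_curve lets us name class 3 as the positive class,
# and auc() integrates the resulting curve.
fpr, tpr, _ = roc_curve(y_true, y_score, pos_label=3)
print(auc(fpr, tpr))  # 0.75: 3 of the 4 (pos, neg) pairs are ranked correctly
```

The proposed `pos_label` parameter on `roc_auc_score` would fold this remapping into the scorer itself.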

Coverage Status

Coverage remained the same when pulling 6ffd0be on ilblackdragon:roc_auc-add-pos_label into f642aee on scikit-learn:master.

jaquesgrobler (Owner) commented Nov 27, 2013

+1 for merge 👍

larsmans commented on the diff Nov 28, 2013

sklearn/metrics/metrics.py
@@ -365,7 +365,7 @@ def auc_score(y_true, y_score):
return roc_auc_score(y_true, y_score)
-def roc_auc_score(y_true, y_score):
+def roc_auc_score(y_true, y_score, pos_label=None):
"""Compute Area Under the Curve (AUC) from prediction scores
Note: this implementation is restricted to the binary classification task.
larsmans (Owner) Nov 28, 2013
Is this still true?

arjoly (Owner) Nov 28, 2013
Yes, since PR #2460 is still waiting to be merged.

ilblackdragon Nov 29, 2013
@arjoly How will #2460 handle the binary case? Will it still return one value for the "positive" class, or return ROC for both classes (i.e., no need for pos_label in this case)?

arjoly (Owner) Nov 29, 2013
How will #2460 handle the binary case?

As it is at the moment, I haven't changed the logic around the positive-label handling.

Will it still return one value for the "positive" class, or return ROC for both classes (i.e., no need for pos_label in this case)?

It detects whether y_true and y_score are in multilabel-indicator format. In that case, there isn't any ambiguity about the number of classes/labels. The format check can easily be done by checking the number of dimensions of y_true/y_score. Note that it doesn't handle the problematic multiclass task.

Depending on the chosen averaging option, you will get one value for all binary tasks or one for each task.
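The averaging behaviour described above can be sketched with the `average` parameter of `roc_auc_score` as it eventually landed (a sketch against the later API, not the code under review in this PR):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Multilabel-indicator format: 2-D y_true, one column per label,
# so there is no ambiguity about the number of labels.
y_true = np.array([[1, 0, 0],
                   [0, 1, 1],
                   [1, 1, 0],
                   [0, 0, 1]])
y_score = np.array([[0.9, 0.2, 0.1],
                    [0.3, 0.8, 0.7],
                    [0.2, 0.6, 0.2],
                    [0.7, 0.1, 0.9]])

# average=None: one AUC per label (task).
per_label = roc_auc_score(y_true, y_score, average=None)
print(per_label)  # [0.5, 1.0, 1.0]

# average='macro': a single value, the unweighted mean over labels.
macro = roc_auc_score(y_true, y_score, average='macro')
print(macro)  # 0.8333...
```

Each column is treated as an independent binary task, which is why no pos_label is needed in this format.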

arjoly (Owner) commented Dec 3, 2013

I think you should have a look at PR #2610 by @jnothman. Should we switch to a labels argument instead of a pos_label one? Note that you should add tests for your new feature. Have a look at sklearn/metrics/tests/test_metrics.py.

amueller (Owner) commented Oct 25, 2016

Closing this as there was no reply; it has also been superseded by #6874.

amueller closed this Oct 25, 2016
