[MRG+1] fix P/R/F for truncated range(n_labels) #10377
Conversation
For example, if n_labels = 5, then passing labels = [0, 1, 2] will give results similar to labels = [0, 1, 2, 3, 4], neglecting the value of labels.
otherwise LGTM
@@ -197,6 +197,13 @@ def test_precision_recall_f_extra_labels():
        assert_raises(ValueError, recall_score, y_true_bin, y_pred_bin,
                      labels=np.arange(-1, 4), average=average)

    y_true = np.array([[0, 1, 1], [1, 0, 0]])
Because it's hard to see what this is testing, it would be good to add a comment saying that it tests non-regression on issue #xxx.
There isn't any issue for this; should I instead refer to this PR itself?
Please add an entry to the change log.
I've made the requested change, though I've referenced the PR number instead of an issue number. I think that is fine as well.
Fixes #10307

Flake8 errors?
Doesn't seem like so to me. Here is the link to the failing Travis job: https://travis-ci.org/scikit-learn/scikit-learn/jobs/323659077. Some random failure?
Sorry, I didn't check properly. Probably just needs a restart.
    p, r, f, _ = precision_recall_fscore_support(y_true, y_pred,
                                                 average='samples',
                                                 labels=[0, 1])
    assert_almost_equal(np.array([p, r, f]), np.array([3. / 4, 1., 5. / 6]))
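For reference, the asserted numbers can be checked by hand. The y_pred used in the test is not visible in this excerpt; `[[1, 1, 1], [1, 0, 1]]` is one hypothetical prediction matrix consistent with the expected values. Restricted to label columns 0 and 1:

```python
import numpy as np

y_true = np.array([[0, 1, 1], [1, 0, 0]])[:, :2]  # columns for labels [0, 1]
y_pred = np.array([[1, 1, 1], [1, 0, 1]])[:, :2]  # hypothetical y_pred

tp = (y_true & y_pred).sum(axis=1)   # per-sample true positives: [1, 1]
p = tp / y_pred.sum(axis=1)          # per-sample precision: [0.5, 1.0]
r = tp / y_true.sum(axis=1)          # per-sample recall:    [1.0, 1.0]
f = 2 * p * r / (p + r)              # per-sample F1:        [2/3, 1.0]
print(p.mean(), r.mean(), f.mean())  # 0.75 1.0 0.8333... = 3/4, 1, 5/6
```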
We have imported division from `__future__` at the top, so let's remove the useless trailing `.` on the float literals.
@gxyd Thanks!!!
@glemaitre were the changes in the commit that you added necessary?
@glemaitre never mind. I just saw your comment here: #10377 (comment). Thanks @jnothman @glemaitre for the review.
@gxyd I decided that it was not worth making a round of change/review for such a nitpick :)
It's backward compatible with `__future__.division`, which is more than sufficient for tests and library code, though you're right for examples where we expect users to cut and paste snippets.
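For context, a minimal illustration of the division semantics under discussion (runs identically on Python 2 with the import and on Python 3, where true division is the default):

```python
from __future__ import division  # no-op on Python 3

# Under Python 2 without this import, 3 / 4 is integer division (== 0).
# With it, / is true division everywhere, so float literals like
# `3. / 4` can be written simply as `3 / 4` in tests and library code.
assert 3 / 4 == 0.75   # true division
assert 3 // 4 == 0     # floor division remains available
```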
What does this implement/fix? Explain your changes.
Fixes #10307
For example, if n_labels = 5, then passing labels = [0, 1, 2] will give results similar to labels = [0, 1, 2, 3, 4], neglecting the value of labels.
Currently in master branch:
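(The output that originally appeared here is not preserved in this excerpt. The sketch below reconstructs the discrepancy on master, reusing the test's y_true with the same hypothetical y_pred as above.)

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

y_true = np.array([[0, 1, 1], [1, 0, 0]])
y_pred = np.array([[1, 1, 1], [1, 0, 1]])  # hypothetical, as above

# Before the fix, the truncated labels=[0, 1] were silently ignored and
# all three label columns were scored, giving roughly p=0.583, r=1.0,
# f=0.733; with the fix, only columns 0 and 1 count: p=0.75, r=1.0,
# f=0.833.
p, r, f, _ = precision_recall_fscore_support(y_true, y_pred,
                                             labels=[0, 1],
                                             average='samples')
print(p, r, f)
```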
I'm not sure if this will require any changes to test_common, since `labels=[0, 1]` and `labels=[1, 0]` should give the same result (at least for `average='samples'`; I haven't thought about other averages). This is similar to a commutative property w.r.t. `labels`.
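A quick sketch of that commutativity check (illustrative data, reusing the arrays from above):

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

y_true = np.array([[0, 1, 1], [1, 0, 0]])
y_pred = np.array([[1, 1, 1], [1, 0, 1]])

# With average='samples', each sample is scored over the same set of
# label columns, so the order of `labels` should not matter.
a = precision_recall_fscore_support(y_true, y_pred,
                                    labels=[0, 1], average='samples')
b = precision_recall_fscore_support(y_true, y_pred,
                                    labels=[1, 0], average='samples')
assert np.allclose(a[:3], b[:3])  # a[3] (support) is None for 'samples'
```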