
ENH: zero_division parameter for classification… #14900

Merged

Conversation

@marctorsoc (Contributor) commented Sep 6, 2019

See issue #14876

What does this implement/fix? Explain your changes.

zero_division parameter for precision, recall, and friends

Any other comments?

3 possible values:

  • "warn": same behavior as before
  • 0, 1: remove the warnings and set the metric to this value when it is ill-defined (see the issue for more details and examples)

Just to clarify:

[image: table summarizing when each metric is ill-defined (ZD = zero division)]

  • prec will be ZD if all predictions are negative
  • rec will be ZD if all labels are negative
  • f will be ZD if everything is negative

Note that zero_division="warn" means 0 plus a warning.
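
A minimal sketch of the behaviour described above, using the parameter as merged (available from scikit-learn 0.22 onwards):

    from sklearn.metrics import precision_score, recall_score, f1_score

    y_true = [0, 0, 0, 0]  # all labels negative -> recall is ill-defined
    y_pred = [0, 0, 0, 0]  # all predictions negative -> precision is ill-defined

    precision_score(y_true, y_pred, zero_division=0)  # 0.0, no warning
    precision_score(y_true, y_pred, zero_division=1)  # 1.0, no warning
    recall_score(y_true, y_pred, zero_division=1)     # 1.0, no warning
    f1_score(y_true, y_pred)  # 0.0 plus UndefinedMetricWarning (default "warn")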

@jnothman (Member) left a comment

Thanks for opening the PR. Sorry I'm not able to review immediately.

…into prec_rec_fscore_zero_division

# Conflicts:
#	sklearn/metrics/classification.py
#	sklearn/metrics/tests/test_classification.py

lost some stuff after merging, need a review
@jnothman (Member) commented Sep 6, 2019 via email

- Changed whats_new to 0.22
- F-score only warns if both prec and rec are ill-defined
- new private method to simplify _prf_divide
@marctorsoc marctorsoc changed the title from "[WIP] Issue 14876: zero_division parameter for classification metrics" to "[MRG] Issue 14876: zero_division parameter for classification metrics" on Sep 7, 2019
@marctorsoc (Contributor, Author) commented Sep 7, 2019

> Thanks for opening the PR. Sorry I'm not able to review immediately.

Hi @jnothman, it's just my second PR to sklearn so I'm still learning :)

I'm having a problem with git: it says I have 9 files changed, but I actually changed only 3. It's as if it's comparing against a master from some days ago. For example, this commit:

2119490

is in master, but its changes appear in the diff. Can you guide me on how to fix this?

@@ -892,6 +903,12 @@ def f1_score(y_true, y_pred, labels=None, pos_label=1, average='binary',
     sample_weight : array-like of shape = [n_samples], optional
         Sample weights.

+    zero_division : string or int, default="warn"
+        Sets the behavior when there is a zero division. If set to
+        ("warn"|0)/1, returns 0/1 when both precision and recall are zero
Member:

I don't think this notation is easy enough to read. How about 'Sets the value to return when blah blah. If "warn" (default), this acts like 0 but also raises a warning.'

Contributor Author:

Wrote something similar, please check the new version.

@@ -1062,7 +1092,12 @@ def _prf_divide(numerator, denominator, metric, modifier, average, warn_for):
         return result

     # remove infs
-    result[mask] = 0.0
+    result[mask] = float(zero_division == 1)
Member:

this is obfuscated. I'd rather 0.0 if zero_division in ('warn', 0) else 1
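
A hedged paraphrase of what this fill step does (not the actual scikit-learn source), with the explicit spelling suggested above:

    import numpy as np

    def fill_ill_defined(result, mask, zero_division):
        # mask marks entries whose denominator was zero (0/0 cases);
        # spell the fill value out, as suggested above:
        result[mask] = 0.0 if zero_division in ("warn", 0) else 1.0
        return result

    fill_ill_defined(np.array([0.5, np.inf]), np.array([False, True]), 1)
    # -> array([0.5, 1. ])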

Contributor Author:

Done in the new version.

fbeta = my_assert(*tmp, y_true, y_pred, beta=beta,
average=average, zero_division=zero_division)

zero_division = float(zero_division == 1)
Member:

obfuscated

Contributor Author:

Simplified in the new version with two separate tests.

assert_array_almost_equal(r, [0, 0, 0], 2)
assert_array_almost_equal(f, [0, 0, 0], 2)
func = precision_recall_fscore_support
my_assert = (assert_warns if zero_division == "warn"
Member:

If you must do something like this, use functools.partial to capture the arguments too.

But I think tests must be very readable code, as the reader needs to be absolutely certain of their correctness to be confident that they in turn imply the correctness of the code.
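
A sketch of the functools.partial idea, using the sklearn testing helpers of that era (assert_warns and assert_no_warnings both take a callable followed by its arguments); make_assert is a hypothetical factory:

    from functools import partial

    from sklearn.exceptions import UndefinedMetricWarning
    from sklearn.utils.testing import assert_warns, assert_no_warnings

    def make_assert(zero_division):
        # capture the warning class so both branches share one call
        # signature: my_assert(func, *args, **kwargs)
        if zero_division == "warn":
            return partial(assert_warns, UndefinedMetricWarning)
        return assert_no_warnings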

Contributor Author:

Simplified in the new version with two separate tests.

fbeta = my_assert(*tmp, y_true, y_pred, beta=beta,
average=None, zero_division=zero_division)

zero_division = float(zero_division == 1)
Member:

This is obfuscated. I'd rather a clear, separate test checking the behaviour of zero_division than a tiny, inexplicit piece in a larger test.

Contributor Author:

Simplified in the new version with two separate tests.

- better docstrings
- more explicit use of zero_division value
- <https://visualstudio.microsoft.com/de/downloads/>`_.
+ <https://visualstudio.microsoft.com/downloads/>`_.

.. warning::
Member:

You've done something strange in trying to merge in changes from master. Please try to merge in the latest master again.

Contributor Author:

Merged master into my branch again. Now only the 3 files appear.

@marctorsoc (Contributor, Author):

@jnothman any more comments?

@jnothman (Member) left a comment

Thanks for the ping.

I don't think we currently test the return value (i.e. zero_division=1) except in the case that all the labels (true and pred) are negative... we don't seem to test zero_division=1 in the zero-sample_weight case either (though it is a pretty weird case).

@@ -2065,7 +2176,8 @@ def log_loss(y_true, y_pred, eps=1e-15, normalize=True, sample_weight=None,
     y_true : array-like or label indicator matrix
         Ground truth (correct) labels for n_samples samples.

-    y_pred : array-like of float, shape = (n_samples, n_classes) or (n_samples,)
+    y_pred : array-like of float, shape = (n_samples, n_classes) or
+        (n_samples,)
Member:

Please leave this as it was. Going over the line length is really the best we can do to render correctly in pydoc and Sphinx.

@@ -1875,7 +2030,7 @@ def test_hinge_loss_multiclass_with_missing_labels():
     np.clip(dummy_losses, 0, None, out=dummy_losses)
     dummy_hinge_loss = np.mean(dummy_losses)
     assert (hinge_loss(y_true, pred_decision, labels=labels) ==
-                 dummy_hinge_loss)
+            dummy_hinge_loss)
Member:

Please do not change unrelated things. It makes your contribution harder to review and may introduce merge conflicts to other pull requests.

Contributor Author:

If I don't change this, I get a flake8 warning:

sklearn/metrics/tests/test_classification.py:1988:18: E127 continuation line over-indented for visual indent

Member:

Yes, I know that this is bad PEP8... we've considered black, but not clearly decided in its favour.

weights="linear"), 0.9412, decimal=4)
assert_almost_equal(cohen_kappa_score(y1, y2,
weights="quadratic"), 0.9541, decimal=4)
assert_almost_equal(
Member:

Please do not change unrelated things. It makes your contribution harder to review and may introduce merge conflicts to other pull requests.

Contributor Author:

Sorry for that. These formatting things are so annoying; have you considered black? It's really handy.

assert_almost_equal(fbeta, 0)


def test_precision_recall_f1_no_labels_average_none():
@pytest.mark.parametrize('zero_division', [0, 1])
Member:

I don't think this is an exemplary use-case for parametrize given that you then need to handle the warnings case separately!

Contributor Author:

Given your previous comment:

> This is obfuscated. I'd rather a clear, separate test checking the behaviour of zero_division than a tiny, inexplicit piece in a larger test.

I decided to separate this into two tests. I think it is a lot more readable. Otherwise, there are ifs or the use of functools.partial. I can go back to the previous version, but honestly, if we want readability I think this is better (maybe with better names).
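
A sketch of the two-test split described above (illustrative test names, not the actual test suite):

    import pytest

    from sklearn.exceptions import UndefinedMetricWarning
    from sklearn.metrics import precision_score

    @pytest.mark.parametrize("zero_division", [0, 1])
    def test_zero_division_value(zero_division):
        # no positive predictions -> precision is ill-defined
        result = precision_score([0, 1], [0, 0], zero_division=zero_division)
        assert result == zero_division

    def test_zero_division_warn():
        # the default, zero_division="warn", returns 0 and raises a warning
        with pytest.warns(UndefinedMetricWarning):
            result = precision_score([0, 1], [0, 0])
        assert result == 0.0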

assert_array_almost_equal(fbeta, [0, 0, 0], 2)


def test_prf_warnings():
@pytest.mark.parametrize('zero_division', ["warn"])
Member:

Not sure how this helps.

@jnothman (Member) commented Sep 25, 2019 via email

@marctorsoc (Contributor, Author) commented Sep 25, 2019

Updated the table in the description, it was wrong :(

  • prec will be ZD if all predictions are negative
  • rec will be ZD if all labels are negative
  • f will be ZD if everything is negative

- added tests for YTN or YPN to check prec/rec with zero_division value
- cleaner tests
@marctorsoc (Contributor, Author):

> Thanks for the ping.
>
> I don't think we currently test the return value (i.e. zero_division=1) except in the case that all the labels (true and pred) are negative... we don't seem to test zero_division=1 in the zero-sample_weight case either (though it is a pretty weird case).

Added zero_division to a test where prec and rec both have their peculiar cases. I don't understand the last comment, do you mean passing the labels param with a label that is not present?
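
For reference, a sketch of the zero-sample_weight corner case mentioned in the quote, assuming the merged behaviour:

    from sklearn.metrics import precision_score

    # all-zero weights make the weighted tp + fp count zero, so
    # precision is ill-defined even though predictions match labels
    precision_score([0, 1], [0, 1], sample_weight=[0, 0], zero_division=1)
    # -> 1.0, with no warning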

@jnothman (Member) commented Sep 25, 2019 via email

@jnothman (Member) left a comment

Thank you. This is looking good!

Let's see what others think about this, including the parameter name which I think is still up for debate.


@marctorsoc (Contributor, Author):

> Thank you. This is looking good!
>
> Let's see what others think about this, including the parameter name which I think is still up for debate.

Thanks!

@thomasjpfan (Member) left a comment

Would on_zero_division be a better name?

@marctorsoc (Contributor, Author):

> Would on_zero_division be a better name?

IMHO it's as readable as zero_division, so I would keep the shorter one, but I have no strong opinion about this.

All the other changes you proposed have been applied.

@thomasjpfan (Member):

When one sets zero_division=1, is it obvious that it means "If the denominator is zero, the value of this metric is 1"?

The logic is "If there is a zero division, then do something (warn, or set 0 or 1)". My concern is that just "zero_division" does not capture the "if..." part of the statement.

Maybe if_zero_division is better?

@jnothman (Member) commented Oct 2, 2019 via email

@marctorsoc (Contributor, Author):

> I prefer on_zero_division to if_zero_division...

For me just zero_division is fine, to be honest, but if I have to choose one I would go for on_zero_division.

Here: https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_validate.html

the error_score parameter has similar behavior. Maybe we want zero_division_score? (IMHO I find it too long. If a user has doubts given the name, she should just check the docs...)

@jnothman (Member) commented Oct 2, 2019 via email

@marctorsoc (Contributor, Author):

Can we have a fourth opinion on this, or just take a decision?

@thomasjpfan (Member):

I am also fine with zero_division.

@thomasjpfan (Member):

Thank you @marctorrellas!

@thomasjpfan thomasjpfan changed the title from "[MRG] Issue 14876: zero_division parameter for classification metrics" to "ENH: zero_division parameter for classification…" on Oct 12, 2019
@thomasjpfan thomasjpfan merged commit 7f079e3 into scikit-learn:master Oct 12, 2019
@marctorsoc marctorsoc deleted the prec_rec_fscore_zero_division branch February 26, 2022 02:23