
Implement Cost-Benefit Matrix objective for binary classification #1038

Merged
merged 23 commits into main from 1025_cost_benefit_obj on Aug 13, 2020

Conversation

angela97lin
Contributor

@angela97lin angela97lin commented Aug 7, 2020

@angela97lin angela97lin self-assigned this Aug 7, 2020
@codecov

codecov bot commented Aug 10, 2020

Codecov Report

Merging #1038 into main will increase coverage by 5.58%.
The diff coverage is 100.00%.


@@            Coverage Diff             @@
##             main    #1038      +/-   ##
==========================================
+ Coverage   94.32%   99.91%   +5.58%     
==========================================
  Files         183      187       +4     
  Lines       10167    10278     +111     
==========================================
+ Hits         9590    10269     +679     
+ Misses        577        9     -568     
Impacted Files Coverage Δ
...alml/objectives/binary_classification_objective.py 100.00% <ø> (ø)
evalml/objectives/fraud_cost.py 100.00% <ø> (ø)
evalml/pipelines/__init__.py 100.00% <ø> (ø)
evalml/objectives/__init__.py 100.00% <100.00%> (ø)
evalml/objectives/cost_benefit_matrix.py 100.00% <100.00%> (ø)
evalml/pipelines/graph_utils.py 100.00% <100.00%> (+50.79%) ⬆️
.../tests/objective_tests/test_cost_benefit_matrix.py 100.00% <100.00%> (ø)
evalml/tests/pipeline_tests/test_graph_utils.py 100.00% <100.00%> (+48.92%) ⬆️
evalml/tests/utils_tests/test_graph_utils.py 100.00% <100.00%> (ø)
evalml/utils/__init__.py 100.00% <100.00%> (ø)
... and 28 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update bfd216d...ac185c5.

@angela97lin angela97lin marked this pull request as ready for review August 10, 2020 22:02
@angela97lin angela97lin added this to the August 2020 milestone Aug 11, 2020
Contributor

@freddyaboulton freddyaboulton left a comment

@angela97lin I think this looks good! I left some minor questions and comments.

evalml/objectives/cost_benefit_matrix.py (outdated; resolved)
evalml/utils/gen_utils.py (outdated; resolved)
evalml/objectives/cost_benefit_matrix.py (resolved)
greater_is_better = True
score_needs_proba = False

def __init__(self, true_positive, true_negative, false_positive, false_negative):
Contributor

I'm not familiar with this objective but why would someone apply non-zero costs for true positives and true negatives?

Contributor

I think I understand better now after looking at the example linked in the notes. Maybe using payoff instead of cost in the docstring (and maybe parameter name too) would be less confusing?

Contributor Author

Aha yes, we decided on this name after a discussion with @dsherry and @kmax12, but I'm definitely open to more suggestions and discussion! I was also thinking it could make sense to make it more obvious that these parameters refer to the cost associated with each value. 🤔

Contributor

Got it. To put my two cents in, I was confused because costs are typically non-negative and something to minimize but here the objective has greater_is_better so I vote to change the docstring or the greater_is_better flag.

Contributor

@freddyaboulton so your confusion was that when you saw the word "cost," you thought it was referring to currency, rather than the ML cost function?

Regardless, I think the current naming works and that we should just do whatever is necessary to make the docs clear.

Contributor

I thought it was weird that I would incur a cost for correctly identifying a true positive or true negative and that in this case we're trying to maximize the cost (I typically think of cost as being minimized in ML).

I think using "payoff" or "reward" would be clearer but I agree that the parameter names work and we should just change the docs! And maybe I'm alone in my confusion! 🤣

evalml/tests/objective_tests/test_cost_benefit_matrix.py (outdated; resolved)
Contributor

@jeremyliweishih jeremyliweishih left a comment

LGTM. Is the plan to reimplement fraud cost after this?

cost_matrix = np.array([[self.true_negative, self.false_positive],
                        [self.false_negative, self.true_positive]])

total_cost = np.multiply(conf_matrix.values, cost_matrix).sum()
Contributor

very clean 🧼

Contributor

Sweet, matrix math for the win!

Contributor

It always bothered me that they called this method "multiply" when what it's really doing is element-wise multiplication, and to get true matrix multiplication you have to use matmul, lol 🤷‍♂️
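To make the element-wise scoring step concrete, here is a small self-contained sketch; the confusion-matrix counts and payoff values below are made up for illustration and are not from the PR:

```python
import numpy as np

# Rows/columns follow sklearn's confusion-matrix convention:
# rows = actual class, columns = predicted class.
conf_matrix = np.array([[50, 10],   # [TN, FP]
                        [5,  35]])  # [FN, TP]

# Example payoff values (hypothetical numbers for this sketch):
true_negative, false_positive = 0.0, -100.0
false_negative, true_positive = -200.0, 400.0

cost_matrix = np.array([[true_negative, false_positive],
                        [false_negative, true_positive]])

# np.multiply is element-wise (Hadamard) multiplication, not matmul,
# so each confusion-matrix cell is weighted by its payoff, then summed.
total_cost = np.multiply(conf_matrix, cost_matrix).sum()
print(total_cost)  # 0*50 + (-100)*10 + (-200)*5 + 400*35 = 12000.0
```

Because both arrays share the same 2x2 layout, a single element-wise product and sum replaces any explicit loop over the four confusion-matrix cells.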

@angela97lin
Contributor Author

@jeremyliweishih Haha, good question! I think there are a lot of similarities between the two, but they're also slightly different, since FraudCost takes in features (X) and uses them to calculate the score, while this is feature-agnostic :D

Contributor

@dsherry dsherry left a comment

@angela97lin this is really cool!! I agree with what you mentioned the other day: I was expecting this would take more code. I think that's a sign that our APIs are working well ✨😂

This is good to merge IMO. But let's resolve a couple points before calling this work complete:

  • If we're going to move some of the pipeline graph utils, we should move all of them. The question is where? Gen utils is fine. Another option is to create a new namespace, evalml/graph or evalml/understanding. We can circle back on this after this PR is merged, but we should address it before the August release because it's a breaking change.
  • Your PR adds API documentation for the new objective. We should also add something to the user guide and/or tutorial. Fine to do that after this PR, and we can discuss ideas elsewhere. One idea would be to replace the "Example: Fraud Detection" section in the objectives guide with a cost-benefit example. Another would be just to add a short section mentioning that objective in the objectives guide. And another idea would be to add a tutorial for it.

docs/source/api_reference.rst (resolved)
evalml/pipelines/graph_utils.py (resolved)
true_positive (float): Cost associated with true positive predictions
true_negative (float): Cost associated with true negative predictions
false_positive (float): Cost associated with false positive predictions
false_negative (float): Cost associated with false negative predictions
Contributor

Could we call these true_positive_cost, etc?

There's an argument to be made for having default values (0) for each. I think I prefer the way it is now, where users have to specify each of the 4 costs in order to use the objective. So, no change needed there IMO, just sharing that thought.

Contributor Author

Yup, that sounds good to me! I was also wondering about using default values, but liked the idea that the user should really have to consider each of these parameters, so I decided against it. tl;dr: I think we're in agreement :D

y_true = pd.Series([0, 1, 2])
y_predicted = pd.Series([1, 0, 1])
with pytest.raises(ValueError, match="y_true contains more than two unique values"):
    cbm.score(y_true, y_predicted)
Contributor

I forgot to comment on your unit tests. They look good! A couple of minor suggestions:

  • Try float cost instead of int, make sure math still works
  • What happens if cost is None, could just raise error in __init__, if true_positive_cost is None or true_negative_cost is None or ...: raise InvalidParameterException('...')
  • Can confusion_matrix return anything weird/invalid? Incorrect dimensions?
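One way the `__init__` validation suggested above could look, as a sketch: the `InvalidParameterException` named in the comment is hypothetical, so a plain `ValueError` stands in here, and the helper name is illustrative:

```python
def validate_costs(true_positive, true_negative, false_positive, false_negative):
    """Reject None for any of the four cost/payoff parameters."""
    costs = {
        "true_positive": true_positive,
        "true_negative": true_negative,
        "false_positive": false_positive,
        "false_negative": false_negative,
    }
    for name, value in costs.items():
        if value is None:
            raise ValueError(f"{name} cost cannot be None")
    return costs

# Float costs work the same as ints, covering the first suggestion:
validate_costs(400.0, 0.0, -100.0, -200.0)

# A None cost fails fast instead of surfacing later inside the math:
try:
    validate_costs(None, 0.0, -100.0, -200.0)
except ValueError as err:
    print(err)  # true_positive cost cannot be None
```

Failing in `__init__` keeps the error close to the user's mistake rather than deep inside `score`, which is the spirit of the suggestion.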

@angela97lin
Contributor Author

@dsherry Thanks for your comments!!

RE consolidating graph utils: I filed #1053, since I didn't want to introduce too many line changes unrelated to this PR. I've assigned the issue to myself and will put up a PR for it shortly after this.
RE adding tutorials: Agreed! That's what #1027 tracks 😄

I'll address your test comments and update this PR accordingly.

@angela97lin angela97lin merged commit 81356e6 into main Aug 13, 2020
@angela97lin angela97lin deleted the 1025_cost_benefit_obj branch August 13, 2020 20:23
@dsherry dsherry mentioned this pull request Aug 25, 2020
Successfully merging this pull request may close these issues.

Implement Cost-Benefit Matrix objective for binary classification
4 participants