
Add ROC/confusion graphing methods #720

Merged: 19 commits merged into master on May 8, 2020

Conversation

@dsherry (Contributor) commented Apr 25, 2020

Fixes #697

Adds back in the capability to generate ROC plots and confusion matrices (removed in #615), now as standalone methods rather than relying on automl to compute the plot data during CV.

Changes

  • Update roc_curve to return a dict
  • Define graph_roc_curve which takes predicted vs actual and makes a graph
  • Update confusion_matrix to optionally call normalize_confusion_matrix internally
  • Define graph_confusion_matrix which takes predicted vs actual and makes a graph
  • Basic unit test coverage, at the level we had before; coverage of the plotting could still be improved! (A usage sketch follows this list.)
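For reference, here is a minimal usage sketch of the methods described above. The import path evalml.pipelines.graph_utils comes from the Codecov file list below; that the graph_* methods return plotly figures (so .show() works) is an assumption, as are the toy inputs:

```python
import numpy as np
from evalml.pipelines.graph_utils import (confusion_matrix, graph_confusion_matrix,
                                          graph_roc_curve, roc_curve)

y_true = np.array([0, 1, 1, 0, 1])
y_pred_proba = np.array([0.1, 0.8, 0.6, 0.3, 0.9])  # positive-class scores
y_pred = (y_pred_proba > 0.5).astype(int)            # thresholded class predictions

roc_data = roc_curve(y_true, y_pred_proba)  # now a dict, e.g. roc_data['auc_score']
graph_roc_curve(y_true, y_pred_proba).show()

conf_mat = confusion_matrix(y_true, y_pred, normalize_method='true')
graph_confusion_matrix(y_true, y_pred, normalize_method='true').show()
```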

What's currently missing

  • A docs example of how to use this; will add!

Open questions

  • Should we say graph or plot here? We use plot in SearchIterationPlot, but we also moved the feature importance chart to graph_feature_importance, and these new names align with that. My default is to keep that pattern.

@dsherry changed the title from "Ds 697 add back plot methods" to "Add ROC/confusion graphing methods" on Apr 25, 2020
@codecov (bot) commented Apr 25, 2020

Codecov Report

Merging #720 into master will increase coverage by 0.01%.
The diff coverage is 100.00%.


@@            Coverage Diff             @@
##           master     #720      +/-   ##
==========================================
+ Coverage   99.34%   99.35%   +0.01%     
==========================================
  Files         148      148              
  Lines        5175     5299     +124     
==========================================
+ Hits         5141     5265     +124     
  Misses         34       34              
Impacted Files                                    Coverage Δ
evalml/pipelines/__init__.py                      100.00% <100.00%> (ø)
evalml/pipelines/graph_utils.py                   100.00% <100.00%> (ø)
evalml/tests/pipeline_tests/test_graph_utils.py   100.00% <100.00%> (ø)

Full report at Codecov. Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data. Last update 7357bcb...8177ea5.

assert np.array_equal(conf_mat_expected, conf_mat)
conf_mat = confusion_matrix(y_true, y_predicted, normalize_method='pred')
conf_mat_expected = np.array([[2 / 3.0, np.nan, 0], [0, np.nan, 1 / 3.0], [1 / 3.0, np.nan, 2 / 3.0]])
assert np.allclose(conf_mat_expected, conf_mat, equal_nan=True)
@dsherry (Contributor, Author) commented:

np.allclose is cool! Setting equal_nan=True means it handles nans. I'm liking this more than array_almost_equal; I think they've actually deprecated that in recent versions, should check.
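For reference, a quick self-contained illustration of the equal_nan behavior under discussion:

```python
import numpy as np

a = np.array([2 / 3.0, np.nan, 0.0])
b = np.array([0.6666666667, np.nan, 0.0])

print(np.allclose(a, b))                  # False: nan != nan by default
print(np.allclose(a, b, equal_nan=True))  # True: nans in matching positions compare equal
```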

@jeremyliweishih (Collaborator) left a comment:

Looks good to me - just some clarifying questions.

Arguments:
y_true (pd.Series or np.array): true binary labels.
y_pred (pd.Series or np.array): predictions from a binary classifier.
normalize_method ({'true', 'pred', 'all'}): Normalization method. Supported options are: 'true' to normalize by row, 'pred' to normalize by column, or 'all' to normalize by all values. Defaults to 'true'.
Collaborator:

Curious about your thoughts on 'true', 'pred', and 'all'. It seems great if we're following the sklearn API, but it has always seemed confusing to me. IMO an axis argument would be clearer.

Contributor:

I'd argue that using 'true' / 'pred' is more helpful because users don't have to figure out which axis corresponds to what (x-axis == true? x-axis == predicted values?); they get direct access to what they want to normalize.

Contributor:

Unless I'm misunderstanding what you mean by axis 😅

Collaborator:

Hmm, when you put it like that, it makes sense 😄
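To make the trade-off concrete: sklearn.metrics.confusion_matrix uses the same option names for its normalize parameter, so the three choices behave like this on a toy example:

```python
from sklearn.metrics import confusion_matrix as sk_confusion_matrix

y_true = [0, 0, 0, 1, 1, 1]
y_pred = [0, 0, 1, 1, 1, 1]

# 'true': each row sums to 1 -- of the actual 0s, what fraction was predicted 0?
print(sk_confusion_matrix(y_true, y_pred, normalize='true'))
# -> [[2/3, 1/3], [0, 1]]

# 'pred': each column sums to 1 -- of the predicted 1s, what fraction was truly 1?
print(sk_confusion_matrix(y_true, y_pred, normalize='pred'))
# -> [[1, 1/4], [0, 3/4]]

# 'all': entries sum to 1 over the whole matrix
print(sk_confusion_matrix(y_true, y_pred, normalize='all'))
```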

return sklearn_roc_curve(y_true, y_pred_proba)
fpr_rates, tpr_rates, thresholds = sklearn_roc_curve(y_true, y_pred_proba)
auc_score = sklearn_auc(fpr_rates, tpr_rates)
return {'fpr_rates': fpr_rates,
Collaborator:

I like this change - much clearer for users!
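For context, the complete function this diff is building toward might look like the following; the 'tpr_rates' and 'thresholds' keys are assumptions read off the variable names, while 'fpr_rates' and 'auc_score' appear in the diff and in the graphing snippet below:

```python
from sklearn.metrics import auc as sklearn_auc
from sklearn.metrics import roc_curve as sklearn_roc_curve

def roc_curve(y_true, y_pred_proba):
    """Return ROC curve data as a labeled dict rather than a bare tuple."""
    fpr_rates, tpr_rates, thresholds = sklearn_roc_curve(y_true, y_pred_proba)
    auc_score = sklearn_auc(fpr_rates, tpr_rates)
    return {'fpr_rates': fpr_rates,
            'tpr_rates': tpr_rates,
            'thresholds': thresholds,
            'auc_score': auc_score}
```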

name='ROC (w/ AUC {:06f})'.format(roc_curve_data['auc_score']),
line=dict(width=3)))
data.append(_go.Scatter(x=[0, 1], y=[0, 1],
name='Random Chance',
Collaborator:

Little nitpick, but I like "Random Guess" over "Random Chance" 😄

@dsherry (Contributor, Author):

Thanks for the suggestion! I actually prefer "Trivial Model" or something like that -- lmk what you think

Collaborator:

That's good as well!
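For reference, a minimal self-contained version of the two traces being discussed (toy ROC points; the baseline label reflects the naming suggested above):

```python
import plotly.graph_objects as go

# Toy ROC points; in the PR these come from roc_curve(...).
fpr = [0.0, 0.0, 0.5, 1.0]
tpr = [0.0, 0.5, 1.0, 1.0]

fig = go.Figure()
fig.add_trace(go.Scatter(x=fpr, y=tpr, name='ROC', line=dict(width=3)))
# The diagonal y = x baseline whose label is under discussion:
fig.add_trace(go.Scatter(x=[0, 1], y=[0, 1], name='Trivial Model',
                         line=dict(dash='dash')))
fig.show()
```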

labels = conf_mat.columns
reversed_labels = labels[::-1]

title = 'Confusion matrix {}{}'.format(
Contributor:

Nit-picking but there's an extra space between "Confusion matrix" and the rest of the title!

@dsherry (Contributor, Author):

Thanks, will fix!

'' if normalize_method is None else (' normalized using method "' + normalize_method + '"'))
z_data, custom_data = (conf_mat, conf_mat_normalized) if normalize_method is None else (conf_mat_normalized, conf_mat)
primary_heading, secondary_heading = ('Raw', 'Normalized') if normalize_method is None else ('Normalized', 'Raw')
hover_text = '<br><b>' + primary_heading + ' Count</b>: %{z}<br><b>' + secondary_heading + ' Count</b>: %{customdata:.3f} <br>'
Contributor:

Also nit-picking but it'd be nice if "Raw Count" was always an integer?

[screenshot of the hover tooltip]

@dsherry (Contributor, Author):

Sure, good idea!
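One way that could look, as a sketch: keep the raw matrix in customdata and format it with plotly's d3-style integer specifier. The variable names mirror the snippet above, but this exact fix is hypothetical:

```python
import numpy as np
import plotly.graph_objects as go

conf_mat = np.array([[2, 1], [0, 3]])
conf_mat_normalized = conf_mat / conf_mat.sum(axis=1, keepdims=True)

# Normalized values drive the heatmap colors; raw counts ride along as customdata.
# ':d' renders the raw counts as integers instead of '.3f' floats.
hover_text = ('<br><b>Normalized Count</b>: %{z:.3f}'
              '<br><b>Raw Count</b>: %{customdata:d}<br>')
fig = go.Figure(go.Heatmap(z=conf_mat_normalized, customdata=conf_mat,
                           hovertemplate=hover_text, colorscale='Blues'))
fig.show()
```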

return conf_mat


def graph_confusion_matrix(y_true, y_pred, normalize_method='true', title_addition=None):
Contributor:

title_addition is a cool addition to both graphing methods! Would it be possible to add tests for it?

@dsherry (Contributor, Author):

Sure! Good point :)
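Such a test might look something like this; the test name and toy data are hypothetical, and the import path is taken from the Codecov file list above:

```python
import numpy as np
from evalml.pipelines.graph_utils import graph_confusion_matrix

def test_graph_confusion_matrix_title_addition():
    y_true = np.array([0, 1, 1, 0, 1])
    y_pred = np.array([0, 1, 0, 0, 1])
    fig = graph_confusion_matrix(y_true, y_pred, title_addition='holdout set')
    # The supplied suffix should appear in the rendered figure title.
    assert 'holdout set' in fig.layout.title.text
```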

@angela97lin (Contributor) left a comment:

LGTM, thanks for adding this back in! I just had a few suggestions :)

@dsherry force-pushed the ds_697_add_back_plot_methods branch from 914b8c9 to f533a81 on April 27, 2020 14:28
@dsherry (Contributor, Author) commented Apr 27, 2020

@angela97lin @jeremyliweishih thanks for the great comments! And I apologize for disturbing your weekends--this was work I did Friday before signing off, but in the future I'll wait until Monday 😂

I've addressed all the comments. Outstanding:

  • I'd like to move plot_utils.py to graph_utils.py. We can chat about this at the team meeting.
  • I need to add an example to the docs; the "search results" page is where we had these plots before. I did run into a potential issue with the confusion matrix while doing that, though, so I have to figure that out.

IMO, this shouldn't block the release; it's ok if this doesn't get into v0.9.0.

@dsherry force-pushed the ds_697_add_back_plot_methods branch from f533a81 to 8514d07 on May 8, 2020 14:19
@dsherry (Contributor, Author) commented May 8, 2020

I've addressed the TODOs above. This is ready to go!

colorscale='Blues'),
layout=layout)
# plotly Heatmap y axis defaults to the reverse of what we want: https://community.plotly.com/t/heatmap-y-axis-is-reversed-by-default-going-against-standard-convention-for-matrices/32180
fig.update_yaxes(autorange="reversed")
@dsherry (Contributor, Author):

@angela97lin : idk if you remember, but a couple weeks ago I mentioned I was having trouble getting the confusion matrix to plot out in the right order. Turns out that's because the y axis on plotly.Heatmap is the reverse of the input data by default! As you can see in the link I posted in the code comment above, they did that because in the field of image processing, images are typically stored in matrices with the y axis inverted.

Long story short, this inversion fixes the problem without the need for us to invert the labels or data :) lmk if you spot anything funky with this code.
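A minimal reproduction of the fix, for anyone who hits the same surprise:

```python
import plotly.graph_objects as go

# Row 0 of a confusion matrix should render at the top, as in a printed matrix.
z = [[2, 1], [0, 3]]
fig = go.Figure(go.Heatmap(z=z, x=['pred 0', 'pred 1'],
                           y=['true 0', 'true 1'], colorscale='Blues'))
# Heatmap draws the y axis bottom-up by default; reverse it so row 0 sits on top.
fig.update_yaxes(autorange="reversed")
fig.show()
```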

@dsherry merged commit 6724d10 into master on May 8, 2020
@dsherry deleted the ds_697_add_back_plot_methods branch on May 8, 2020 16:01
Development
Successfully merging this pull request may close: Add methods to plot ROC and confusion matrix (#697)
3 participants