Update to PR Curve docs #1074

Merged · 3 commits · Jun 24, 2020
10 changes: 5 additions & 5 deletions docs/api/classifier/classification_report.rst
@@ -45,13 +45,13 @@ Workflow Model evaluation

The classification report shows a representation of the main classification metrics on a per-class basis. This gives deeper intuition about classifier behavior than global accuracy, which can mask functional weaknesses in one class of a multiclass problem. Visual classification reports are used to compare classification models in order to select models that are "redder", e.g. that have stronger classification metrics or that are more balanced.

The metrics are defined in terms of true and false positives, and true and false negatives. Positive and negative in this case are generic names for the classes of a binary classification problem. In the example above, we would consider true and false occupied and true and false unoccupied. Therefore a true positive is when the actual class is positive as is the estimated class. A false positive is when the actual class is negative but the estimated class is positive. Using this terminology the metrics are defined as follows:

**precision**
Precision can be seen as a measure of a classifier's exactness. For each class, it is defined as the ratio of true positives to the sum of true and false positives. Said another way, "for all instances classified positive, what percent was correct?"

**recall**
Recall is a measure of the classifier's completeness; the ability of a classifier to correctly find all positive instances. For each class, it is defined as the ratio of true positives to the sum of true positives and false negatives. Said another way, "for all instances that were actually positive, what percent was classified correctly?"

**f1 score**
The F\ :sub:`1` score is a weighted harmonic mean of precision and recall such that the best score is 1.0 and the worst is 0.0. Generally speaking, F\ :sub:`1` scores are lower than accuracy measures as they embed precision and recall into their computation. As a rule of thumb, the weighted average of F\ :sub:`1` should be used to compare classifier models, not global accuracy.
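
To make these definitions concrete, here is a small illustrative computation (a hypothetical confusion matrix, not taken from the occupancy example) showing how precision, recall, and F\ :sub:`1` follow from the counts of true positives, false positives, and false negatives:

# Hypothetical counts for the positive class of a binary problem
tp, fp, fn = 50, 10, 20

precision = tp / (tp + fp)  # 50 / 60 ≈ 0.83: "of everything labeled positive, how much was right?"
recall = tp / (tp + fn)     # 50 / 70 ≈ 0.71: "of everything actually positive, how much was found?"
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean ≈ 0.77

print(precision, recall, f1)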
@@ -74,13 +74,13 @@ show it.

from sklearn.model_selection import TimeSeriesSplit
from sklearn.naive_bayes import GaussianNB

from yellowbrick.datasets import load_occupancy
from yellowbrick.classifier import classification_report

# Load the classification data set
X, y = load_occupancy()

# Specify the target classes
classes = ["unoccupied", "occupied"]

96 changes: 84 additions & 12 deletions docs/api/classifier/prcurve.rst
@@ -3,13 +3,7 @@
Precision-Recall Curves
=======================

The ``PrecisionRecallCurve`` shows the tradeoff between a classifier's precision, a measure of result relevancy, and recall, a measure of completeness. For each class, precision is defined as the ratio of true positives to the sum of true and false positives, and recall is the ratio of true positives to the sum of true positives and false negatives.

================= ==============================
Visualizer :class:`~yellowbrick.classifier.prcurve.PrecisionRecallCurve`
@@ -18,32 +12,110 @@ Models Classification
Workflow Model evaluation
================= ==============================

**precision**
Precision can be seen as a measure of a classifier's exactness. For each class, it is defined as the ratio of true positives to the sum of true and false positives. Said another way, "for all instances classified positive, what percent was correct?"

**recall**
Recall is a measure of the classifier's completeness; the ability of a classifier to correctly find all positive instances. For each class, it is defined as the ratio of true positives to the sum of true positives and false negatives. Said another way, "for all instances that were actually positive, what percent was classified correctly?"

**average precision**
Average precision expresses the precision-recall curve in a single number, which
represents the area under the curve. It is computed as the weighted average of precision achieved at each threshold, where the weights are the differences in recall from the previous thresholds.

Both precision and recall vary between 0 and 1, and in our efforts to select and tune machine learning models, our goal is often to try to maximize both precision and recall, i.e. a model that returns accurate results for the majority of classes it selects. This would result in a ``PrecisionRecallCurve`` visualization with a high area under the curve.
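
As a quick, hedged illustration of the average precision computation (a toy example using scikit-learn utilities, not part of the original documentation), the weighted average described above can be reproduced directly from the output of ``sklearn.metrics.precision_recall_curve``:

import numpy as np
from sklearn.metrics import precision_recall_curve, average_precision_score

# Toy labels and decision scores for a tiny binary problem
y_true = np.array([0, 0, 1, 1, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.7])

precision, recall, _ = precision_recall_curve(y_true, y_score)

# AP = sum over thresholds of (R_n - R_{n-1}) * P_n, i.e. precision weighted by the change in recall
ap_manual = -np.sum(np.diff(recall) * precision[:-1])

print(ap_manual, average_precision_score(y_true, y_score))  # the two values agree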

Binary Classification
---------------------

The base case for precision-recall curves is the binary classification case, and this case is also the most visually interpretable. In the figure below we can see the precision plotted on the y-axis against the recall on the x-axis. The larger the filled in area, the stronger the classifier. The red line annotates the average precision.

.. plot::
:context: close-figs
:alt: PrecisionRecallCurve with Binary Classification

import matplotlib.pyplot as plt

from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import train_test_split as tts
from yellowbrick.classifier import PrecisionRecallCurve
from yellowbrick.datasets import load_spam

# Load the dataset and split into train/test splits
X, y = load_spam()

X_train, X_test, y_train, y_test = tts(
X, y, test_size=0.2, shuffle=True, random_state=0
)

# Create the visualizer, fit, score, and show it
viz = PrecisionRecallCurve(RidgeClassifier(random_state=0))
viz.fit(X_train, y_train)
viz.score(X_test, y_test)
viz.show()

One way to use ``PrecisionRecallCurve`` visualizers is for model comparison, by examining which model has the highest average precision. For instance, the visualization below suggests that a ``LogisticRegression`` model might be better than a ``RidgeClassifier`` for this particular dataset:

.. plot::
:context: close-figs
:include-source: False
:alt: Comparing PrecisionRecallCurves with Binary Classification

import matplotlib.pyplot as plt

from yellowbrick.datasets import load_spam
from yellowbrick.classifier import PrecisionRecallCurve
from sklearn.model_selection import train_test_split as tts
from sklearn.linear_model import RidgeClassifier, LogisticRegression

# Load the dataset and split into train/test splits
X, y = load_spam()

X_train, X_test, y_train, y_test = tts(
X, y, test_size=0.2, shuffle=True, random_state=0
)

# Create the visualizers, fit, score, and show them
models = [
RidgeClassifier(random_state=0), LogisticRegression(random_state=0)
]
_, axes = plt.subplots(ncols=2, figsize=(8,4))

for idx, ax in enumerate(axes.flatten()):
    viz = PrecisionRecallCurve(models[idx], ax=ax, show=False)
    viz.fit(X_train, y_train)
    viz.score(X_test, y_test)
    viz.finalize()

plt.show()

Precision-recall curves are one of the methods used to evaluate a classifier's quality, particularly when classes are very imbalanced. The plot below suggests that our classifier improves when we increase the weight of the "spam" class (labeled 1) and decrease the weight of the "not spam" class (labeled 0).

.. plot::
:context: close-figs
:alt: Optimizing PrecisionRecallCurve with Binary Classification

from yellowbrick.datasets import load_spam
from sklearn.linear_model import LogisticRegression
from yellowbrick.classifier import PrecisionRecallCurve
from sklearn.model_selection import train_test_split as tts

# Load the dataset and split into train/test splits
X, y = load_spam()

X_train, X_test, y_train, y_test = tts(
X, y, test_size=0.2, shuffle=True, random_state=0
)

# Specify class weights to shift the threshold towards spam classification
weights = {0:0.2, 1:0.8}

# Create the visualizer, fit, score, and show it
viz = PrecisionRecallCurve(
LogisticRegression(class_weight=weights, random_state=0)
)
viz.fit(X_train, y_train)
viz.score(X_test, y_test)
viz.show()

Multi-Label Classification
--------------------------
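
The body of this section is collapsed in the diff view. As a rough, hedged sketch of a per-class precision-recall curve (``per_class`` and ``micro`` are the options mentioned in the docstring notes below; the synthetic dataset and the ``RandomForestClassifier`` here are illustrative stand-ins, not necessarily what the documentation uses):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split as tts

from yellowbrick.classifier import PrecisionRecallCurve

# Synthetic three-class problem standing in for a real multi-class dataset
X, y = make_classification(
    n_samples=1000, n_classes=3, n_informative=6, random_state=0
)
X_train, X_test, y_train, y_test = tts(X, y, test_size=0.2, random_state=0)

# Draw one curve per class instead of the micro-averaged curve
viz = PrecisionRecallCurve(
    RandomForestClassifier(n_estimators=10, random_state=0),
    per_class=True,
    micro=False,
)
viz.fit(X_train, y_train)
viz.score(X_test, y_test)
viz.show()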
38 changes: 27 additions & 11 deletions yellowbrick/classifier/prcurve.py
@@ -55,12 +55,14 @@ class PrecisionRecallCurve(ClassificationScoreVisualizer):
Precision-Recall curves are a metric used to evaluate a classifier's quality,
particularly when classes are very imbalanced. The precision-recall curve
shows the tradeoff between precision, a measure of result relevancy, and
recall, a measure of completeness. For each class, precision is defined as
the ratio of true positives to the sum of true and false positives, and
recall is the ratio of true positives to the sum of true positives and false
negatives.

A large area under the curve represents both high recall and precision, the
best case scenario for a classifier, showing a model that returns accurate
results for the majority of classes it selects.

Parameters
----------
@@ -193,6 +195,15 @@ class PrecisionRecallCurve(ClassificationScoreVisualizer):

Notes
-----
To support multi-label classification, the estimator is wrapped in a
``OneVsRestClassifier`` to produce binary comparisons for each class
(e.g. the positive case is the class and the negative case is any other
class). The precision-recall curve can then be computed as the micro-average
of the precision and recall for all classes (by setting micro=True), or individual
curves can be plotted for each class (by setting per_class=True).

Note also that some parameters of this visualizer are learned on the ``score``
method, not only on ``fit``.

.. seealso:: https://bit.ly/2kOIeCC
"""
@@ -250,8 +261,8 @@ def __init__(

def fit(self, X, y=None):
"""
Fit the classification model; if ``y`` is multi-class, then the estimator
is adapted with a ``OneVsRestClassifier`` strategy, otherwise the estimator
is fit directly.
"""
# The target determines what kind of estimator is fit
@@ -288,6 +299,7 @@ def score(self, X, y):
Average precision, a summary of the plot as a weighted mean of
precision at each threshold, weighted by the increase in recall from
the previous threshold.

"""
# If we don't do this check, then it is possible that OneVsRestClassifier
# has not correctly been fitted for multi-class targets.
@@ -501,10 +513,14 @@ def precision_recall_curve(
Precision-Recall curves are a metric used to evaluate a classifier's quality,
particularly when classes are very imbalanced. The precision-recall curve
shows the tradeoff between precision, a measure of result relevancy, and
recall, a measure of completeness. For each class, precision is defined as
the ratio of true positives to the sum of true and false positives, and
recall is the ratio of true positives to the sum of true positives and false
negatives.

A large area under the curve represents both high recall and precision, the
best case scenario for a classifier, showing a model that returns accurate
results for the majority of classes it selects.

Parameters
----------