Commit a56b1a5: Remove batchwise metrics

fchollet committed Jan 18, 2017
1 parent 1c630c3 commit a56b1a5
Showing 1 changed file with 2 additions and 96 deletions.
keras/metrics.py: 98 changes (2 additions & 96 deletions)
@@ -74,110 +74,16 @@ def poisson(y_true, y_pred):
 def cosine_proximity(y_true, y_pred):
     y_true = K.l2_normalize(y_true, axis=-1)
     y_pred = K.l2_normalize(y_pred, axis=-1)
-    return -K.mean(y_true * y_pred)
+    return - K.mean(y_true * y_pred)


-def matthews_correlation(y_true, y_pred):
-    """Matthews correlation metric.
-
-    It is only computed as a batch-wise average, not globally.
-    Computes the Matthews correlation coefficient measure for quality
-    of binary classification problems.
-    """
-    y_pred_pos = K.round(K.clip(y_pred, 0, 1))
-    y_pred_neg = 1 - y_pred_pos
-
-    y_pos = K.round(K.clip(y_true, 0, 1))
-    y_neg = 1 - y_pos
-
-    tp = K.sum(y_pos * y_pred_pos)
-    tn = K.sum(y_neg * y_pred_neg)
-
-    fp = K.sum(y_neg * y_pred_pos)
-    fn = K.sum(y_pos * y_pred_neg)
-
-    numerator = (tp * tn - fp * fn)
-    denominator = K.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
-
-    return numerator / (denominator + K.epsilon())
-
-
-def precision(y_true, y_pred):
-    """Precision metric.
-
-    Only computes a batch-wise average of precision.
-    Computes the precision, a metric for multi-label classification of
-    how many selected items are relevant.
-    """
-    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
-    predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
-    precision = true_positives / (predicted_positives + K.epsilon())
-    return precision
-
-
-def recall(y_true, y_pred):
-    """Recall metric.
-
-    Only computes a batch-wise average of recall.
-    Computes the recall, a metric for multi-label classification of
-    how many relevant items are selected.
-    """
-    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
-    possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
-    recall = true_positives / (possible_positives + K.epsilon())
-    return recall
-
-
-def fbeta_score(y_true, y_pred, beta=1):
-    """Computes the F score.
-
-    The F score is the weighted harmonic mean of precision and recall.
-    Here it is only computed as a batch-wise average, not globally.
-    This is useful for multi-label classification, where input samples can be
-    classified as sets of labels. By only using accuracy (precision) a model
-    would achieve a perfect score by simply assigning every class to every
-    input. In order to avoid this, a metric should penalize incorrect class
-    assignments as well (recall). The F-beta score (ranged from 0.0 to 1.0)
-    computes this, as a weighted mean of the proportion of correct class
-    assignments vs. the proportion of incorrect class assignments.
-    With beta = 1, this is equivalent to a F-measure. With beta < 1, assigning
-    correct classes becomes more important, and with beta > 1 the metric is
-    instead weighted towards penalizing incorrect class assignments.
-    """
-    if beta < 0:
-        raise ValueError('The lowest choosable beta is zero (only precision).')
-
-    # If there are no true positives, fix the F score at 0 like sklearn.
-    if K.sum(K.round(K.clip(y_true, 0, 1))) == 0:
-        return 0
-
-    p = precision(y_true, y_pred)
-    r = recall(y_true, y_pred)
-    bb = beta ** 2
-    fbeta_score = (1 + bb) * (p * r) / (bb * p + r + K.epsilon())
-    return fbeta_score
-
-
-def fmeasure(y_true, y_pred):
-    """Computes the f-measure, the harmonic mean of precision and recall.
-
-    Here it is only computed as a batch-wise average, not globally.
-    """
-    return fbeta_score(y_true, y_pred, beta=1)
-
-
-# aliases
+# Aliases
 mse = MSE = mean_squared_error
 mae = MAE = mean_absolute_error
 mape = MAPE = mean_absolute_percentage_error
 msle = MSLE = mean_squared_logarithmic_error
 cosine = cosine_proximity
-fscore = f1score = fmeasure


 def get(identifier):

6 comments on commit a56b1a5

@samarth-b

Can someone explain why fmeasure was removed and if there are any alternatives?
Thanks

@andremann commented on a56b1a5, Apr 5, 2017

Same here. I've been scratching my head for a while after updating to 2.0.1.
I quickly tried using helper functions from sklearn, with no success (problems casting tensors to lists); I also tried referring to the symbolic functions from TF, with the same result... (I am a noob.)
For the time being I have rolled back to 1.2.2.
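
A workaround several people use (my sketch, not something posted in this thread) is to compute the metrics globally once per epoch with scikit-learn inside a Keras Callback: model.predict returns NumPy arrays, which sidesteps the tensor-to-list casting problem. A minimal sketch, assuming a binary classifier and the Keras 1.x/2.x Callback API (EpochMetrics and the variable names are made up here):

import numpy as np
from keras.callbacks import Callback
from sklearn.metrics import f1_score, precision_score, recall_score

class EpochMetrics(Callback):
    """Compute global (epoch-level) precision/recall/F1 on validation data."""

    def __init__(self, val_data, val_labels):
        super(EpochMetrics, self).__init__()
        self.val_data = val_data      # NumPy array of validation inputs
        self.val_labels = val_labels  # NumPy array of binary labels (0/1)

    def on_epoch_end(self, epoch, logs=None):
        # model.predict returns NumPy arrays, so sklearn can consume them directly.
        pred = (self.model.predict(self.val_data) > 0.5).astype(int).ravel()
        true = self.val_labels.ravel()
        print(' - val_precision: %.4f - val_recall: %.4f - val_f1: %.4f' % (
            precision_score(true, pred),
            recall_score(true, pred),
            f1_score(true, pred)))

# Usage (hypothetical data names):
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           callbacks=[EpochMetrics(x_val, y_val)])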

@karimpedia (Contributor) commented on a56b1a5, Apr 21, 2017

See #5794

@rimjhim365

In my case the precision and recall values come out the same as the accuracy. Any suggestions?

@dgrahn commented on a56b1a5, Oct 25, 2018

For everyone looking for an easy way to use these metrics, here's a gist: https://gist.github.com/dgrahn/f68447e6cc83989c51617571396020f9
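
If you only need the old batch-wise behaviour, the functions deleted in this commit still work as custom metrics; they just have to live in your own code now. A minimal sketch based on the removed keras/metrics.py definitions (batch-wise averages, not global values), assuming the Keras 2.x backend API:

from keras import backend as K

def precision(y_true, y_pred):
    # Batch-wise precision, as in the removed keras/metrics.py code.
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
    return true_positives / (predicted_positives + K.epsilon())

def recall(y_true, y_pred):
    # Batch-wise recall, as in the removed keras/metrics.py code.
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
    return true_positives / (possible_positives + K.epsilon())

def f1(y_true, y_pred):
    # Harmonic mean of the two batch-wise values above.
    p = precision(y_true, y_pred)
    r = recall(y_true, y_pred)
    return 2 * p * r / (p + r + K.epsilon())

# Custom metrics are passed to compile() as function references, since the
# 'precision'/'recall'/'fmeasure' string names no longer resolve after this commit:
# model.compile(optimizer='adam', loss='binary_crossentropy',
#               metrics=[precision, recall, f1])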

@goodskillprogramer
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

I added these two lines at the start of the function:
y_true = y_true[:, 1]
y_pred = y_pred[:, 1]
This slices the y_true and y_pred tensors. It seems to work.
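
For context, this slicing only makes sense when the labels and predictions are two-column one-hot/softmax outputs, so that column 1 is the positive class; the binary batch-wise formulas can then be applied to that column. A sketch of my reading of the suggestion (hypothetical function name, not verified against the commenter's code):

from keras import backend as K

def precision_positive_class(y_true, y_pred):
    # Assumes two-column one-hot labels and softmax outputs; keep only the
    # positive-class column before applying the batch-wise precision formula.
    y_true = y_true[:, 1]
    y_pred = y_pred[:, 1]
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
    return true_positives / (predicted_positives + K.epsilon())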
