EarlyStopping is ignoring my custom metrics defined #10018

Closed
Libardo1 opened this issue Apr 23, 2018 · 2 comments

Hi there, I am trying to classify credit card fraud with a Keras neural network model.
Because the dataset is imbalanced, I need to use f1_score to improve the recall.

Apparently, Keras is not accepting my f1s definition.
How can I monitor my new metrics at each epoch? Early stopping works fine with val_loss but not with the metrics I defined.
I would appreciate your help solving this issue.

import pandas as pd
import numpy as np
import keras
import sklearn.metrics as sklm
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

df = pd.read_csv('creditcard.csv')
X = df.iloc[:, :-1].values
y = df.iloc[:, -1].values
X_train, X_test, Y_train, Y_test = train_test_split(X, y, test_size=0.3, random_state=1)
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

class Metrics(keras.callbacks.Callback):
    def on_train_begin(self, logs={}):
        self.confusion = []
        self.precision = []
        self.recall = []
        self.f1s = []
        self.kappa = []
        self.auc = []

    def on_epoch_end(self, epoch, logs={}):
        score = np.asarray(self.model.predict(self.validation_data[0]))
        predict = np.round(np.asarray(self.model.predict(self.validation_data[0])))
        targ = self.validation_data[1]

        self.auc.append(sklm.roc_auc_score(targ, score))
        self.confusion.append(sklm.confusion_matrix(targ, predict))
        self.precision.append(sklm.precision_score(targ, predict))
        self.recall.append(sklm.recall_score(targ, predict))
        self.f1s.append(sklm.f1_score(targ, predict))
        self.kappa.append(sklm.cohen_kappa_score(targ, predict))
        return

metrics = Metrics()

from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras import regularizers

clf51 = Sequential([
    Dense(units=128, kernel_initializer='uniform', kernel_regularizer=regularizers.l2(0.05), input_dim=30, activation='relu', name='layer1_In'),
    Dropout(0.5),
    Dense(units=128, kernel_initializer='uniform', activation='relu', name='layer2'),
    Dropout(0.5),
    Dense(128, kernel_initializer='uniform', kernel_regularizer=regularizers.l2(0.03), activation='relu', name='layer3'),
    Dropout(0.5),
    Dense(1, kernel_initializer='uniform', activation='sigmoid', name='layer6_Out')
])

Define the callbacks:

from keras.callbacks import BaseLogger, EarlyStopping, ReduceLROnPlateau, ModelCheckpoint

baselogger = BaseLogger()
earlystop = EarlyStopping(monitor='f1s', min_delta=1e-4, patience=5, verbose=0, mode='max')
reduce_lr = ReduceLROnPlateau(monitor='recall', factor=0.2, patience=5, min_lr=0.001)

Set up the optimization of the weights and compile:

import tensorflow as tf
from keras import backend as K
from keras.optimizers import SGD

sgd51 = SGD(lr=0.00825, decay=1e-6, momentum=0.9, nesterov=True)
clf51.compile(optimizer=sgd51, loss='binary_crossentropy', metrics=["accuracy"])

with tf.Session(config=tf.ConfigProto(intra_op_parallelism_threads=12)) as sess:
    K.set_session(sess)
    clf51.fit(X_train, Y_train, batch_size=384, epochs=10, callbacks=[earlystop, metrics], validation_split=0.30, verbose=2)
    score = clf51.evaluate(X_test, Y_test, batch_size=128, verbose=1)
    y_pred = clf51.predict(X_test)
    checkpoint = ModelCheckpoint('model_CLF51.hdf5', save_best_only=True, monitor='f1s', mode='max')

I receive this message:
Train on 139554 samples, validate on 59810 samples
Epoch 1/10

 - 7s - loss: 0.3585 - acc: 0.9887 - val_loss: 0.0560 - val_acc: 0.9989
/home/libardo/anaconda3/lib/python3.6/site-packages/keras/callbacks.py:526: RuntimeWarning: Early stopping conditioned on metric f1s which is not available. Available metrics are: val_loss,val_acc,loss,acc
  (self.monitor, ','.join(list(logs.keys()))), RuntimeWarning
Dref360 (Contributor) commented Apr 24, 2018

That's not how you define a metric. You defined a Callback, which is not the same thing.
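
For context (not part of the original reply): a Keras metric is a function of (y_true, y_pred) passed to compile(metrics=[...]); Keras evaluates it every epoch and writes it into the logs dict that EarlyStopping reads, with a val_ prefix for the validation set. The Callback defined above never adds an f1s key to logs, which is why the warning only lists val_loss, val_acc, loss and acc. A minimal sketch, reusing the clf51/X_train/Y_train names from the original post:

from keras import backend as K
from keras.callbacks import EarlyStopping

def f1s(y_true, y_pred):
    # Batch-wise F1 from rounded predictions; assumes binary 0/1 labels.
    y_pred = K.round(K.clip(y_pred, 0, 1))
    tp = K.sum(y_true * y_pred)
    precision = tp / (K.sum(y_pred) + K.epsilon())
    recall = tp / (K.sum(y_true) + K.epsilon())
    return 2 * precision * recall / (precision + recall + K.epsilon())

# Passing the function to compile() is what makes 'f1s' and 'val_f1s' appear in logs.
clf51.compile(optimizer='sgd', loss='binary_crossentropy', metrics=['accuracy', f1s])
earlystop = EarlyStopping(monitor='val_f1s', mode='max', min_delta=1e-4, patience=5)
clf51.fit(X_train, Y_train, batch_size=384, epochs=10, validation_split=0.30, callbacks=[earlystop], verbose=2)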

This issue isn't related to a bug/enhancement/feature request or other accepted types of issue.

To ask questions, please see the following resources:

Thanks!

If you think I made a mistake, please re-open this issue.

Dref360 closed this as completed Apr 24, 2018

mmalekzadeh commented Sep 14, 2018

I had the same problem. I solved it like this.
I'm using a custom metric:

#https://stackoverflow.com/a/45305384/5210098
def f1_metric(y_true, y_pred):
    def recall(y_true, y_pred):
        true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
        possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
        recall = true_positives / (possible_positives + K.epsilon())
        return recall

    def precision(y_true, y_pred):
        true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
        predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
        precision = true_positives / (predicted_positives + K.epsilon())
        return precision
    precision = precision(y_true, y_pred)
    recall = recall(y_true, y_pred)
    return 2*((precision*recall)/(precision+recall+K.epsilon()))
 

and use EarlyStopping like this:

early_stop = keras.callbacks.EarlyStopping(monitor='val_f1_metric', patience = 5)

and compile with the new metric:

model.compile( loss="categorical_crossentropy", optimizer='adam', metrics=['acc', f1_metric])
    print("Model Size = "+str(eval_act.count_params()))

and pass it to the callbacks of the fit function. The output looks fine:

Epoch 5/50
 - 28s - loss: 0.5225 - acc: 0.9530 - f1_metric: 0.9533 - val_loss: 0.4911 - val_acc: 0.9401 - val_f1_metric: 0.9403

But it never goes beyond 6 epochs with patience 5, even though there was still progress being made: with the default mode, EarlyStopping assumes the monitored value should decrease, so the rising F1 looks like no improvement.

I've added mode='max':

EarlyStopping(monitor='val_f1_metric', mode='max', patience = 5)

and it works well.
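
Putting the pieces of this answer together, a minimal sketch (the model and data names are placeholders, not from the original comment):

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc', f1_metric])
early_stop = keras.callbacks.EarlyStopping(monitor='val_f1_metric', mode='max', patience=5)
model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=50, callbacks=[early_stop], verbose=2)

Keras names the logged metric after the function, so f1_metric and val_f1_metric are the keys that EarlyStopping can monitor.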
