
Precision, Recall and F1 Metrics Removed #5794

Closed

Lif3line opened this issue Mar 15, 2017 · 12 comments

Comments

@Lif3line

It appears Precision, Recall and F1 metrics have been removed from metrics.py as of today, but I couldn't find any reference to their removal in the commit logs. Was this intentional?

@soldni

soldni commented Mar 15, 2017

Yes, it was intentional. See https://github.com/fchollet/keras/wiki/Keras-2.0-release-notes

@Lif3line
Author

Ah, I missed that. Thank you very much.

@inexxt
Contributor

inexxt commented Mar 19, 2017

What was the reason behind removing them?

@fchollet
Member

fchollet commented Mar 19, 2017 via email
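
For context, the Keras 2.0 release notes linked above explain that these were global metrics which Keras approximated batch-wise, which is more misleading than helpful. A toy sketch (mine, not from the thread) of how a batch-averaged metric can diverge from the globally computed one:

import numpy as np
from sklearn.metrics import precision_score

y_true = np.array([1, 1, 0, 0, 1, 0, 0, 0])
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 1])

# Precision over the full set: 2 TP / 5 predicted positives = 0.4
global_p = precision_score(y_true, y_pred)

# The same metric computed per batch of 4 and then averaged, which is in
# effect what the removed batch-wise metrics reported: (1.0 + 0.25) / 2 = 0.625
batch_p = np.mean([precision_score(y_true[i:i + 4], y_pred[i:i + 4])
                   for i in range(0, len(y_true), 4)])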

@karimpedia
Contributor

@fchollet Are there any plans to add another implementation of these metrics (evaluated globally)?

@iuria21

iuria21 commented May 24, 2017

@karimpedia What I did was create a Callback and calculate them at the end of each epoch on the validation data.

import numpy as np
import keras

class Metrics(keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        # fit() attaches the validation set to the callback as
        # self.validation_data = [inputs, targets, sample_weights]
        predict = np.asarray(self.model.predict(self.validation_data[0]))
        targ = self.validation_data[1]
        self.f1s = f1(targ, predict)

metrics = Metrics()
model.fit(X_train, y_train, epochs=epochs, batch_size=batch_size,
          validation_data=(X_test, y_test),
          verbose=1, callbacks=[metrics])

Then implement an f1 function that computes the F1 score yourself, or use scikit-learn's f1_score instead.
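
A minimal sketch of such an f1 helper built on scikit-learn (the thresholding and the average argument are assumptions on my part, for binary targets with sigmoid outputs):

import numpy as np
from sklearn.metrics import f1_score

def f1(targ, predict):
    # Threshold the predicted probabilities before scoring
    return f1_score(targ, np.round(predict), average='binary')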

@ndor

ndor commented Nov 2, 2017

@imanoluria, I was using your code (thanks for posting it, BTW) with sklearn's F1 in my model. The model has 3 inputs and one output.
I used two callbacks: callbacks=[checkpointer, metrics].
Alas, I get this error:

  File "build/bdist.linux-x86_64/egg/keras/engine/training.py", line 1631, in fit
  File "build/bdist.linux-x86_64/egg/keras/engine/training.py", line 1233, in _fit_loop
  File "build/bdist.linux-x86_64/egg/keras/callbacks.py", line 73, in on_epoch_end
  File "rafael.py", line 29, in on_epoch_end
    predict = np.asarray(self.model.predict(self.validation_data[0]))
  File "build/bdist.linux-x86_64/egg/keras/engine/training.py", line 1730, in predict
  File "build/bdist.linux-x86_64/egg/keras/engine/training.py", line 121, in _standardize_input_data
ValueError: The model expects 3  arrays, but only received one array. Found: array with shape (34374, 15, 9)

The fit portion:

    history = model.fit([T_train, S_train, P_train], y_train,
                        batch_size=size_batch,
                        nb_epoch=epochs,
                        verbose=1,
                        callbacks=[checkpointer, metrics],
                        class_weight=weights_dict,
                        validation_data=[[T_validation, S_validation, P_validation], y_validation],
                        shuffle=True)

My validation inputs are (the first dim is #samples):
(34374, 15, 9) - temporal samples
(34374, 3) - 1D vector samples
(34374, 7) - 1D vector samples

My model's output is a one-hot 25-category vector...

Can you see what the problem is? Thanks a lot!

@fchollet - will there be a built-in API for such metrics in the checkpointer or a similar mechanism in the future?
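
A likely cause (my reading of how Keras populates callbacks, not an answer from the thread): with a multi-input model, the callback's self.validation_data is a flat list [T_val, S_val, P_val, y_val, sample_weights], so self.validation_data[0] is only the first input array, and predict() then sees one array where it expects three. A sketch of a fix (the class name and the macro averaging are my choices):

import keras
from sklearn.metrics import f1_score

class MultiInputMetrics(keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        inputs = list(self.validation_data[:3])  # T, S and P validation arrays
        targ = self.validation_data[3]           # one-hot 25-category targets
        predict = self.model.predict(inputs)
        # Compare class indices, since targets and outputs are one-hot
        self.f1s = f1_score(targ.argmax(axis=-1), predict.argmax(axis=-1),
                            average='macro')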

@ShiangYong

Hi @basque21, I tried to extend your example to work with predict_generator, but it did not work. Any ideas?

The error I got was AttributeError: 'Metrics' object has no attribute 'validation_data'.

My code snippet is below:

class Metrics(keras.callbacks.Callback):
    def on_epoch_end(self, batch, logs={}):
        predict = self.model.predict_generator(
            self.validation_data,
            steps=self.validation_steps,
            workers=6
        )
        targ = self.targ
        self.f1s = f1(targ, predict)

@dakl

dakl commented Apr 20, 2018

@ShiangYong Did you set validation_data when you called fit()?

@gabrielam2018

gabrielam2018 commented Apr 20, 2018 via email

@ShiangYong

@dakl No, I need to use fit_generator or predict_generator for my applications; those expect generators, not validation_data.
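
One workaround (a sketch; the class and argument names are hypothetical): Keras only fills in self.validation_data when fit() receives in-memory arrays, so with fit_generator you can hand the validation generator, step count, and targets to the callback yourself:

import numpy as np
import keras
from sklearn.metrics import f1_score

class GeneratorMetrics(keras.callbacks.Callback):
    def __init__(self, val_generator, val_steps, val_targets):
        super(GeneratorMetrics, self).__init__()
        self.val_generator = val_generator
        self.val_steps = val_steps
        self.val_targets = val_targets

    def on_epoch_end(self, epoch, logs={}):
        predict = self.model.predict_generator(self.val_generator,
                                               steps=self.val_steps)
        self.f1s = f1_score(self.val_targets, np.round(predict),
                            average='macro')

# e.g. metrics = GeneratorMetrics(val_gen, val_steps, y_val), then pass
# callbacks=[metrics] to fit_generator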

@arcticriki

arcticriki commented Jun 25, 2018

If someone needs to implement this, I suggest the following workaround:

  • install the keras_metrics package by ybubnov
  • call model.fit(epochs=1, ...) inside a for loop, taking advantage of the precision/recall metrics reported after every epoch

Something like this:

import keras_metrics

# The model must be compiled with these metrics for 'val_precision' and
# 'val_recall' to show up in the history (optimizer and loss are placeholders)
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=[keras_metrics.precision(), keras_metrics.recall()])

for epoch in range(epochs):
    model_hist = model.fit(X_train, Y_train, batch_size=batch_size, epochs=1,
                           verbose=2, validation_data=(X_val, Y_val))

    precision = model_hist.history['val_precision'][0]
    recall = model_hist.history['val_recall'][0]
    f_score = (2.0 * precision * recall) / (precision + recall)
