This repository has been archived by the owner on Nov 15, 2020. It is now read-only.
Keras used to propose these metrics, but they were removed because they were approximated on batches. For more information see this issue.
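To illustrate why batch-wise approximation was a problem, here is a minimal sketch (the data and the batch split are illustrative assumptions): averaging precision computed per batch is not the same as precision computed over the full dataset.

```python
from sklearn.metrics import precision_score

y_true = [1, 1, 0, 0, 1, 0, 0, 0]
y_pred = [1, 0, 0, 0, 1, 1, 1, 1]

# Precision over the full dataset: 2 true positives / 5 predicted positives
global_p = precision_score(y_true, y_pred)  # 0.4

# Batch-wise: compute per batch of 4, then average the batch values
b1 = precision_score(y_true[:4], y_pred[:4])  # 1.0
b2 = precision_score(y_true[4:], y_pred[4:])  # 0.25
batch_avg = (b1 + b2) / 2                     # 0.625 != 0.4
```

The batch average (0.625) overstates the true precision (0.4), which is why these metrics were dropped from Keras.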
Another package, keras-metrics, proposes ready-to-use metrics for Keras, but there seems to be a problem with models saved using model.save(): metrics defined by this package aren't correctly serialized in the .h5 file. See this issue.
As fchollet suggests, the best option might be to use a custom workflow (hence our second scenario).
This has been done in #23.
Metrics from scikit-learn have been used for the evaluation, mainly accuracy_score, precision_score, recall_score, f1_score, and confusion_matrix. More may be added in the future.
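A minimal sketch of that evaluation step, assuming predictions have already been converted to class labels (e.g. via `model.predict(...).argmax(axis=1)`); the label arrays here are illustrative:

```python
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, confusion_matrix)

# Illustrative ground-truth and predicted class labels
y_true = [0, 1, 1, 0, 1, 0]
y_pred = [0, 1, 0, 0, 1, 1]

acc = accuracy_score(y_true, y_pred)      # fraction of exact matches
prec = precision_score(y_true, y_pred)    # TP / (TP + FP)
rec = recall_score(y_true, y_pred)        # TP / (TP + FN)
f1 = f1_score(y_true, y_pred)             # harmonic mean of prec and rec
cm = confusion_matrix(y_true, y_pred)     # rows: true class, cols: predicted
```

For multi-class problems, precision_score, recall_score, and f1_score additionally need an `average` argument (e.g. `average="macro"`).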
Accuracy is not sufficient to evaluate models; different metrics can be used. This issue explores the evaluation process with metrics. There are two scenarios:

1. `model.compile` and `model.evaluate` do the job (with the `custom_metrics` argument when using `keras.models.load_model`)
2. `model.predict`, then `scikit-learn` proposes a bunch of metrics