KerasClassifier.score is ... broken!? #38004
Comments
@FirefoxMetzger, |
Sure:
A minimal non-working example to demonstrate the issue:
Output:
The reason for this discrepancy is described in the opening comment. |
Was able to reproduce the issue with TF v2.0, TF v2.1 and TF-nightly. Please find the attached gist. Thanks! |
@FirefoxMetzger As mentioned in the error, I changed the metric in the compile call. Please close the issue if this was resolved for you. Thanks! |
You are correct. It does resolve the exception, as I mentioned in my issue. It does not solve the underlying issue, though. It simply doesn't crash anymore and instead silently computes the wrong accuracy; arguably much worse, because it is easier to miss. Using
Both accuracy numbers should be identical. Both represent the fraction of the test dataset that has been assigned the correct label by the model, and both numbers use the same model and same test data. For reference, here is the code:
|
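The discrepancy between the two numbers can be sketched without TensorFlow. The functions below are hypothetical pure-Python mimics, under the assumption that tf.keras.metrics.Accuracy counts exact element-wise matches between labels and predictions, while CategoricalAccuracy compares the argmax per sample:

```python
# Hypothetical pure-Python mimics of the two metrics (assumed semantics,
# not the actual TensorFlow implementations).
def elementwise_accuracy(y_true, y_pred):
    """Fraction of individual entries that are exactly equal."""
    flat_true = [v for row in y_true for v in row]
    flat_pred = [v for row in y_pred for v in row]
    return sum(t == p for t, p in zip(flat_true, flat_pred)) / len(flat_true)

def categorical_accuracy(y_true, y_pred):
    """Fraction of samples whose argmax matches the one-hot label."""
    argmax = lambda row: max(range(len(row)), key=row.__getitem__)
    return sum(argmax(t) == argmax(p) for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [[1.0, 0.0], [0.0, 1.0]]  # one-hot labels
y_pred = [[0.9, 0.1], [0.2, 0.8]]  # softmax outputs

print(categorical_accuracy(y_true, y_pred))  # 1.0: both samples classified correctly
print(elementwise_accuracy(y_true, y_pred))  # 0.0: no raw value matches exactly
```

Under these assumed semantics, an element-wise comparison of one-hot labels against softmax probabilities can never match, so the reported "accuracy" says nothing about classification quality.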
Please take a look at issue #38596, linked above. I believe this is the underlying issue for why the Keras |
@FirefoxMetzger, |
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you. |
Hey @amahendrakar, I checked out the comment and, unfortunately, it doesn't resolve the issue. It really seems to be the naming of the desired metric, as I quoted in the opening post. If you check the Keras metrics, you can find the metric at tensorflow/tensorflow/python/keras/metrics.py, Line 662 in e24331b, whereas the desired metric is defined at tensorflow/tensorflow/python/keras/metrics.py, Line 758 in e24331b, under the name categorical_accuracy.
That is what I believe to be the issue, unless I misunderstand how the wrapper is supposed to work. |
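As a rough illustration of why the class name and the registered metric name differ: Keras metric objects report a snake_cased default name, which is why CategoricalAccuracy shows up as categorical_accuracy in the model's metric names. The helper below is a hypothetical stand-in, not the actual Keras utility:

```python
import re

# Hypothetical sketch: derive the snake_cased default name from a metric
# class name, mirroring the names quoted in the thread above.
def snake_case(class_name):
    return re.sub(r'(?<!^)(?=[A-Z])', '_', class_name).lower()

print(snake_case('CategoricalAccuracy'))  # categorical_accuracy
print(snake_case('Accuracy'))             # accuracy
```

Only the second name matches what the wrapper looks for, which is consistent with the behaviour reported in this thread.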
Was able to reproduce the issue with TF v2.2 and TF-nightly. Please find the attached gist. Thanks! |
Was able to reproduce your issue with TF nightly 2.6.0-dev20210524, please find the gist here. Thanks! |
Hi There, This is a stale issue. As you are using an older version of tensorflow, we are checking to see if you still need help on this issue. Please test the issue with the latest TensorFlow (TF2.7 and tf-nightly). If the issue still persists with the newer versions of TF, please feel free to open it in keras-team/keras repository by providing details about the issue and a standalone code to reproduce the issue. Thanks! Please note that Keras development has moved to a separate Keras-team/keras repository to focus entirely on only Keras. Thanks! |
I am using the scikit_learn wrapper to wrap a Keras model and train / evaluate it in scikit-learn. Calling KerasClassifier.score should return the accuracy of the classifier; however, no matter what I do, it just doesn't.
Looking at the source, the code does two things: it calls Sequential.evaluate and then hopes to find a metric called acc or accuracy, which it treats as the accuracy of the model (lines 302 - 307). If it doesn't manage to find a metric with the right name, it raises an exception.
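The lookup just described can be sketched in a few lines. This is a hypothetical paraphrase of the wrapper's logic, assuming it simply scans the model's metric names for the strings 'acc' or 'accuracy'; the function name and the error text are illustrative, not copied from the source:

```python
# Hypothetical sketch of the lookup on lines 302 - 307 of
# keras/wrappers/scikit_learn.py (paraphrased, not the actual source).
def score_from_metrics(metrics_names, outputs):
    """Return the value of the metric the wrapper treats as accuracy."""
    for name, output in zip(metrics_names, outputs):
        if name in ('acc', 'accuracy'):
            return output
    # The real wrapper raises at this point; the wording here is paraphrased.
    raise ValueError('The model is not configured to compute accuracy.')

print(score_from_metrics(['loss', 'acc'], [0.31, 0.97]))  # 0.97

# With CategoricalAccuracy, the reported name is 'categorical_accuracy',
# so the lookup never matches and score() raises:
try:
    score_from_metrics(['loss', 'categorical_accuracy'], [0.31, 0.97])
except ValueError as err:
    print(err)
```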
I don't understand how this could possibly work (and it doesn't work for me). Given that the target labels are one-hot encoded, the correct metric to use is CategoricalAccuracy; however, it is named categorical_accuracy (tensorflow/tensorflow/python/keras/metrics.py, Line 758 in e24331b). Logically, KerasClassifier.score raises an exception. Worse, the error message suggests adding the Accuracy metric to the model. This can be misleading, as it makes the error disappear and returns a value, but that value is not ... accurate (pun intended).
I suggest renaming accuracy to categorical_accuracy in tensorflow/tensorflow/python/keras/wrappers/scikit_learn.py (Line 306 in e24331b) and, while at it, adding _estimator_type = "classifier" as a class variable. Scikit-learn checks for it to identify KerasClassifier as a classifier, and without it a lot of functionality doesn't work as intended.
If there is agreement on this change, I can submit a PR.