How can I get both test accuracy and validation accuracy for each epoch #2548
Comments
You can train the model for a single epoch and then evaluate it, repeating this procedure in a for loop for as many epochs as you need. However, evaluating on the test set after every epoch is not what we should do when training a model; the test set should only be used once training is finished.
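The loop described above can be sketched as follows. `StubModel` and `train_with_per_epoch_eval` are hypothetical names used here for illustration; the stub stands in for a real Keras model so the control flow can be shown without actually training anything — with Keras, `model.fit(..., epochs=1)` and `model.evaluate(...)` would take its place.

```python
class StubModel:
    """Hypothetical stand-in exposing the two methods the loop relies on."""
    def fit(self, x, y, epochs=1, verbose=0):
        pass  # one epoch of training would happen here

    def evaluate(self, x, y, verbose=0):
        return [0.5, 0.9]  # placeholder [loss, accuracy]


def train_with_per_epoch_eval(model, train, test, n_epochs):
    """Train one epoch at a time, evaluating on the test data after each."""
    x_train, y_train = train
    x_test, y_test = test
    history = []
    for epoch in range(n_epochs):
        model.fit(x_train, y_train, epochs=1, verbose=0)
        loss, acc = model.evaluate(x_test, y_test, verbose=0)
        history.append({'epoch': epoch, 'test_loss': loss, 'test_acc': acc})
    return history


history = train_with_per_epoch_eval(StubModel(), (None, None), (None, None), 3)
print(len(history))  # one history entry per epoch
```

Note that `fit` keeps the model's weights between calls, so calling it repeatedly with `epochs=1` is equivalent to one longer run (aside from anything that resets per call, such as some learning-rate schedules).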
Well, I wrote a callback for this purpose:

```python
from keras.callbacks import Callback

class TestCallback(Callback):
    def __init__(self, test_data):
        self.test_data = test_data

    def on_epoch_end(self, epoch, logs=None):
        x, y = self.test_data
        loss, acc = self.model.evaluate(x, y, verbose=0)
        print('\nTesting loss: {}, acc: {}\n'.format(loss, acc))
```

Then you can call:

```python
model.fit(X_train, Y_train, validation_data=(X_val, Y_val),
          callbacks=[TestCallback((X_test, Y_test))])
```
E.g.

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

tmp = Sequential()
tmp.add(Dense(20, input_shape=(10,)))
tmp.compile(optimizer='adadelta', loss='mse', metrics=['mse', 'accuracy'])
tmp.evaluate(np.zeros((1, 10)), np.zeros((1, 20)))
```

..which yields `[loss, mse, accuracy]` as a list.
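Since `evaluate` returns the metric values in the same order as `model.metrics_names`, the two lists can be zipped into a readable dict. The names and zero values below are illustrative, matching the example above rather than real training output:

```python
# model.metrics_names gives the labels for the list that evaluate() returns,
# in matching order, so zipping them produces a name -> value mapping.
metrics_names = ['loss', 'mse', 'accuracy']   # as reported by model.metrics_names
metrics_values = [0.0, 0.0, 0.0]              # as returned by model.evaluate(...)

report = dict(zip(metrics_names, metrics_values))
print(report)
```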
Can anyone help me, please? The evaluate function in Keras performs very poorly, although the training and validation results are very good. What is the problem?
@manallllll This is a question better suited for Stack Overflow, but you should always post code if you want help.
@joelthchao I defined my custom callback as per your code, but it raises a "too many values to unpack" error.
Your method works; however, Keras does not log the test results in the log file. It does so for the validation set, but not for the test set.
@joelthchao is 0.9319 the testing accuracy or the validation accuracy?
Note: logging is still broken, but as also stated in keras-team/keras#2548 (comment), the TestCallback from keras-team/keras#2548 (comment) does not work: when the `evaluate()` method is called in an `on_epoch_end` callback, the validation dataset is always used.
@KhawYewOnn were you able to work around the issue where `self.model.evaluate(x, y)` uses the validation data instead of the test data?
Based on this I made the following:

```python
from keras import callbacks

class TestCallback(callbacks.Callback):
    '''
    Outputs the metrics of the test sample at the end of each epoch.
    '''
    def __init__(self, *args, **kwargs):
        self.args = args
        # Defaults go on the LEFT of the dict union so that a caller-supplied
        # `verbose` overrides them (the right operand wins in `|`, Python 3.9+).
        self.kwargs = {'verbose': 0} | kwargs

    def on_epoch_end(self, epoch, logs=None):
        metrics_values = self.model.evaluate(*self.args, **self.kwargs)
        print('\r\033[1mTest\033[0m', end='')  # the word "Test" in bold
        for key, val in zip(self.model.metrics_names, metrics_values):
            print(' - test_%s: %.4f' % (key, val), end='')
        print(' ' * 1000)  # pad with spaces to overwrite the previous line
```

Not the most beautiful solution, but more general.
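A side note on the `|` merge used in `__init__` above: with dict union (Python 3.9+), the right operand wins on key conflicts, so defaults must sit on the left for caller-supplied values to override them. A small standalone illustration:

```python
# Dict union (Python 3.9+): on duplicate keys the RIGHT operand wins.
# Defaults therefore belong on the left so callers can override them.
defaults = {'verbose': 0}
caller_kwargs = {'verbose': 1, 'batch_size': 32}

merged = defaults | caller_kwargs   # caller's verbose wins
wrong = caller_kwargs | defaults    # defaults clobber the caller's value

print(merged['verbose'], wrong['verbose'])  # 1 0
```

On Python versions before 3.9, `{**defaults, **kwargs}` gives the same left-to-right merge behaviour.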
Yes, the test metrics are printed before the rest, but I didn't figure out how to fix that.
Hi, everyone.
I can use `model.evaluate()` to calculate the test accuracy for the last epoch, but how can I get both the test accuracy and the validation accuracy for each epoch?