
Added validation split, compute validation loss in training #15

Merged · 1 commit into hardmaru:master on Feb 23, 2017

Conversation

@dribnet (Contributor) commented on Feb 23, 2017

Quick attempt to add a validation split and compute the validation loss during training. The general idea is to get a sense of whether the model starts to overfit to the data it has seen. This also provides a more deterministic metric when choosing hyperparameters across runs.

This could probably be cleaned up, and I'm not sure it follows TensorFlow idioms, etc., but it seems to be working in its current form. I'm curious to get feedback on whether others think this could be a useful metric during training.
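
For anyone skimming, here is a minimal sketch of the general pattern described above, not the code in this PR. The `model` object with its `train_step` and `loss` methods is a hypothetical placeholder for whatever the actual model exposes, and `data` is assumed to be a NumPy array of examples:

```python
import numpy as np

def train_validation_split(data, validation_fraction=0.1, seed=0):
    """Shuffle once with a fixed seed, then hold out a slice for validation."""
    rng = np.random.RandomState(seed)
    indices = rng.permutation(len(data))
    n_val = int(len(data) * validation_fraction)
    # Train on everything except the held-out slice.
    return data[indices[n_val:]], data[indices[:n_val]]

def iterate_batches(data, batch_size):
    """Yield consecutive non-overlapping batches, dropping any remainder."""
    for start in range(0, len(data) - batch_size + 1, batch_size):
        yield data[start:start + batch_size]

def train(model, data, num_epochs=10, batch_size=50):
    train_data, val_data = train_validation_split(data)
    for epoch in range(num_epochs):
        for batch in iterate_batches(train_data, batch_size):
            model.train_step(batch)  # gradient update on training data only
        # Validation loss is measured on data the optimizer never sees, so a
        # rising val_loss while training loss keeps falling signals overfitting.
        val_losses = [model.loss(batch)
                      for batch in iterate_batches(val_data, batch_size)]
        print("epoch %d: validation loss %.4f" % (epoch, np.mean(val_losses)))
```

Holding out a fixed, seeded slice is what makes the number comparable across runs, which is why it can serve as a more deterministic metric for hyperparameter selection.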

@hardmaru (Owner) commented:

Thx!

@hardmaru merged commit 3d3114f into hardmaru:master on Feb 23, 2017