Check if there are already old models that have been evaluated. Fixes #540 #541
Conversation
And, BTW, the commit message is "non-functioning check"; I'm not clear how this resolves #540…?
Per @jesteria's comment, this doesn't appear to be complete. As it stands, besides a test, there are some changes that I think are important even to what is being sketched out:
Codecov Report
@@            Coverage Diff             @@
##           master     #541      +/-   ##
==========================================
+ Coverage   82.63%   82.69%   +0.05%
==========================================
  Files          82       82
  Lines        4636     4651      +15
==========================================
+ Hits         3831     3846      +15
  Misses        805      805
Continue to review full report at Codecov.
I added some test cases for a ModelEvaluator.needs_evaluations method to this branch that can serve as a guide to finishing this feature.
- Add ModelEvaluator.needs_evaluation, which checks whether any configured evaluation metrics/parameters are missing in the database
- Call ModelEvaluator.needs_evaluation from ModelTester and skip both prediction and evaluation if none is needed.
Force-pushed from a8539c9 to 9a16646
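A minimal, standalone sketch of the check the commit message above describes. The evaluations table layout (model_id, metric, parameter columns), the helper name, and its signature are assumptions for illustration; the actual ModelEvaluator/ModelTester code may differ.

```python
from sqlalchemy import text


def needs_evaluation(db_engine, model_id, configured_metrics):
    """Return True if any configured (metric, parameter) pair has no row
    in the evaluations table for this model_id (table/columns assumed).

    configured_metrics: iterable of (metric, parameter) tuples,
    e.g. [("precision@", "100_abs"), ("recall@", "5_pct")].
    """
    with db_engine.connect() as conn:
        rows = conn.execute(
            text(
                "SELECT metric, parameter FROM evaluations "
                "WHERE model_id = :model_id"
            ),
            {"model_id": model_id},
        )
        already_stored = {(row.metric, row.parameter) for row in rows}
    # Anything configured but not yet stored means evaluation is still needed.
    return any(pair not in already_stored for pair in configured_metrics)
```

A tester loop could then call this per model_id and skip both prediction and evaluation when it returns False, while still re-running models whose configuration gained new metrics or parameters.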
When rerunning an experiment, this patch checks whether the model_id already has an entry in the evaluations table. If so, it skips the evaluation, saving time on longer model runs.
NOTE: If the evaluation parameters differ from those of the original model run, the evaluation will still be skipped.
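By contrast with the commit message above, the check summarized here looks only at whether any evaluation row exists for the model_id, which is what produces the caveat in the NOTE. A rough sketch under the same assumed schema:

```python
from sqlalchemy import text


def has_any_evaluation(db_engine, model_id):
    """True if at least one row exists in the evaluations table for this
    model_id, regardless of which metrics/parameters produced it."""
    with db_engine.connect() as conn:
        row = conn.execute(
            text("SELECT 1 FROM evaluations WHERE model_id = :model_id LIMIT 1"),
            {"model_id": model_id},
        ).first()
    return row is not None
```

Because only model_id is inspected, rerunning with different evaluation parameters still finds the old rows and skips the new evaluation, exactly as the NOTE warns.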