Fix bug in metric computation when there is overlap with grid objective #567
Conversation
- Move the check from the config parsing stage to `Learner.evaluate()`, since the issue is really about preventing duplicate computation of the metric at the experiment level, which happens there.
- Use input files that actually exist.
- Use a better config template.
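The idea behind the first bullet can be sketched as follows. This is a hypothetical illustration, not SKLL's actual API: the function name `deduplicate_metrics` and its arguments are invented for this example. The point is simply that, at evaluation time, any output metric identical to the grid-search objective is dropped so it is not computed twice.

```python
def deduplicate_metrics(grid_objective, output_metrics):
    """Drop any output metric that duplicates the grid objective,
    so the same metric is not computed twice for the experiment.

    Hypothetical helper for illustration; preserves the order of
    the remaining metrics.
    """
    return [metric for metric in output_metrics
            if metric != grid_objective]


# Example: "f1_score_micro" is both the grid objective and a
# requested output metric, so it is removed from the output list.
print(deduplicate_metrics(
    "f1_score_micro",
    ["accuracy", "f1_score_micro", "unweighted_kappa"]))
# → ['accuracy', 'unweighted_kappa']
```

Doing this inside the evaluation step, rather than at config parsing, means the check runs once per experiment with the final resolved metric names in hand.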
Hello @desilinguist! Thanks for updating this PR.
Comment last updated at 2019-10-22 00:57:35 UTC
Codecov Report
@@            Coverage Diff             @@
##           master     #567      +/-   ##
==========================================
+ Coverage   91.69%   95.02%    +3.32%
==========================================
  Files          20       20
  Lines        2975     2972        -3
==========================================
+ Hits         2728     2824       +96
+ Misses        247      148       -99
Continue to review full report at Codecov.
# Conflicts:
#	tests/test_classification.py
Co-Authored-By: Aoife Cahill <acahill@ets.org>
This PR addresses #564. It moves the check for metrics that overlap with the grid objective from the config parsing stage (in config.py) to the experiment level (in learner.py), and updates test_input.py and test_classification.py accordingly.