
Fix bug in metric computation when there is overlap with grid objective #567

Merged — 7 commits merged into master on Oct 22, 2019

Conversation

@desilinguist (Member) commented Oct 20, 2019

This PR addresses #564.

  • Moves the overlap check from the configuration level (`config.py`) to the experiment level (`learner.py`).
  • Adds another check for code introduced in the earlier metric-computation PR: the message about probabilities being used to compute the metric is no longer shown when computing the objective, since showing it alongside the overlap message could send conflicting signals to the user.
  • Removes the old overlap-checking test from test_input.py.
  • Adds a new overlap-checking test to test_classification.py.

- Move it from the config parsing stage to `Learner.evaluate()`, since the issue is really about preventing duplicate computation of the metric at the experiment level, which is where that computation happens.
- Use input files that actually exist.
- Use a better config template.
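The overlap check described above can be sketched roughly as follows. This is a hypothetical simplification written for illustration, not the actual `skll.Learner.evaluate()` code; the names `grid_objective` and `output_metrics` follow SKLL's configuration terminology, but the helper function itself is invented.

```python
import logging

logger = logging.getLogger(__name__)


def filter_overlapping_metrics(grid_objective, output_metrics):
    """Drop any output metric that duplicates the grid objective.

    Hypothetical sketch of the check described in this PR: rather than
    raising an error at config-parsing time, the overlap is detected at
    evaluation time, the duplicate metric is skipped, and the user is
    warned so the metric is not computed twice.
    """
    filtered = []
    for metric in output_metrics:
        if metric == grid_objective:
            logger.warning(
                "Metric %s is already the grid search objective; "
                "it will not be computed again as an output metric.",
                metric,
            )
        else:
            filtered.append(metric)
    return filtered


# Example: "f1" overlaps with the objective and is dropped.
print(filter_overlapping_metrics("f1", ["f1", "accuracy"]))  # → ['accuracy']
```

The design choice here matches the first commit message: the check belongs at the experiment level because only there is it known that the same metric would actually be computed twice.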
@pep8speaks commented Oct 20, 2019

Hello @desilinguist! Thanks for updating this PR.

Line 1739:101: E501 line too long (101 > 100 characters)

Comment last updated at 2019-10-22 00:57:35 UTC

@codecov bot commented Oct 20, 2019

Codecov Report

Merging #567 into master will increase coverage by 3.32%.
The diff coverage is 100%.

@@            Coverage Diff             @@
##           master     #567      +/-   ##
==========================================
+ Coverage   91.69%   95.02%   +3.32%     
==========================================
  Files          20       20              
  Lines        2975     2972       -3     
==========================================
+ Hits         2728     2824      +96     
+ Misses        247      148      -99
Impacted Files        Coverage            Δ
skll/config.py        95.12% <ø>     (+1.18%) ⬆️
skll/learner.py       96.00% <100%>  (+7.28%) ⬆️
skll/metrics.py       97.14% <0%>    (+2.85%) ⬆️
skll/experiments.py   95.52% <0%>    (+5.80%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update e6da3ca...ca539b1.

# Conflicts:
#	tests/test_classification.py
Co-Authored-By: Aoife Cahill <acahill@ets.org>
@desilinguist desilinguist merged commit 1e8fb90 into master Oct 22, 2019
@delete-merged-branch delete-merged-branch bot deleted the 564-metric-computation-bug branch October 22, 2019 00:57