
Multiple minor bug fixes and improvements #392

Merged
13 commits merged from multiple-bugfixes-improvements into master on Nov 23, 2017

Conversation

desilinguist (Member)

This is another short PR that encompasses several minor bug fixes and improvements:

  1. Update to scikit-learn v0.19.1. In this particular scikit-learn release, the stratified split functions fail if the labels are floating point, so we need to catch that condition in SKLL and provide a more informative message. Added such a check to learner.cross_validate() along with a test (a rough sketch of this kind of check follows this list).

  2. Deal with empty feature files by raising an exception. Added a test and also fixed an unbound variable bug.

  3. Add neg_log_loss as a possible metric and tuning objective for classifiers. This requires probability to be True, so add a check for that along with a test.

  4. If we are just specifying a train_file and a test_file, there are no featuresets, and hence it doesn't make sense to do ablation in that case. Added a warning and a fall-through to handle this scenario.

  5. Updated documentation for all of the above as appropriate.
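For illustration, here is a minimal sketch of the kind of floating-point label check described in item 1. This is not the actual SKLL code; the helper name and error message are made up, and SKLL performs the check inside learner.cross_validate().

```python
import numpy as np

def check_labels_for_stratified_cv(labels):
    """
    Raise an informative error if the labels are floating point, since
    scikit-learn 0.19.1's stratified split functions fail on such labels.
    """
    labels = np.asarray(labels)
    if np.issubdtype(labels.dtype, np.floating):
        raise ValueError(
            "Floating-point labels cannot be used with stratified "
            "cross-validation. Convert the labels to strings or integers, "
            "or use a non-stratified split."
        )
```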

- In scikit-learn v0.19.1, the stratified split functions fail if the labels are floating point so we need to catch that condition in SKLL to provide a more informative message.
- Initialize possibly unbound variable and add an error check.
- Add a check to make sure probability is true if neg_log_loss is used as the objective (see the sketch after these notes).
- Add a test.
- If we have a single featureset of length 1, then we cannot ablate anything.
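A minimal sketch of the probability check for neg_log_loss, along the lines of the change to _parse_config_file() discussed in the review thread below. The standalone helper name is hypothetical; in SKLL the check is done inline while parsing the configuration.

```python
def check_probability_for_neg_log_loss(grid_objectives, probability):
    """
    neg_log_loss needs class probabilities, so probability must be True
    whenever it appears among the tuning objectives.
    """
    if 'neg_log_loss' in grid_objectives and not probability:
        raise ValueError(
            "The 'neg_log_loss' objective requires probabilistic output, "
            "so 'probability' must be set to True in the configuration."
        )
```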
@coveralls

coveralls commented Nov 13, 2017

Coverage Status

Coverage decreased (-0.04%) to 92.296% when pulling 827afa3 on multiple-bugfixes-improvements into 87b0717 on master.

skll/config.py Outdated
@@ -654,6 +654,12 @@ def _parse_config_file(config_path, log_level=logging.INFO):
output_metrics = [metric for metric in output_metrics
if metric not in common_metrics_and_objectives]

# if the grid objectives contains `neg_log_loss`, then probability
# must be specified as true since that' needed to compute the loss
Contributor

that' --> that's

@coveralls

coveralls commented Nov 14, 2017

Coverage Status

Coverage increased (+0.03%) to 92.371% when pulling 655648a on multiple-bugfixes-improvements into 87b0717 on master.

@desilinguist (Member Author)

I fixed the coverage issue. This PR is now ready to be reviewed :)

@aoifecahill (Collaborator)

Thanks!
