Remove default value of objective function and `objective` config field #458
Conversation
- Remove code for normalizing and error-checking the two fields since there is now only one field.
- Add an additional check that `objectives` is not empty when running an experiment.
- Make the default value of `grid_objective` be `None` for `train()` and `cross_validate()`, and add a check in `train()` that raises an exception if we are doing grid search and the objective is not specified.
- Make `metric` a required parameter for `learning_curve()`.
- Remove the now-unneeded test for the removed `objective` field.
- Fix other tests that were using `objective` to now use `objectives`.
- Update tests since there's no longer a default value.
- flake8 fixes.
- We only need to check this for specific tasks.
- To check that missing objectives is fine for this task.
- Since there are no default objectives now.
- We don't want the objective in the names of the output files if (a) it was not specified to begin with or (b) there was only one objective, since then it's obvious which file is which.
- It will not be available if we aren't doing grid search.
- This would prevent unnecessary computation in cases when grid search was false but the user still specified a list of objectives.
- Specify `grid_objective` explicitly when calling `train()`.
- Some flake8 fixes.
- For some reason the same file was being appended to.
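The grid-search guard described in the commits above might look roughly like this. This is a minimal sketch: the function signature, the `ValueError`, and the returned dict are illustrative assumptions, not SKLL's actual API.

```python
# Hypothetical sketch of the new behavior: `grid_objective` defaults to
# None, and grid search without an objective is an error. Names and the
# error type are assumptions for illustration, not SKLL's exact API.
def train(examples, grid_search=True, grid_objective=None):
    if grid_search and grid_objective is None:
        raise ValueError(
            "Grid search is enabled but no grid_objective was specified."
        )
    if not grid_search:
        # Ignore any supplied objective: it is only used to pick
        # hyperparameters, so dropping it avoids unnecessary computation.
        grid_objective = None
    return {"grid_search": grid_search, "objective": grid_objective}
```

Note that callers who previously relied on a default objective would now have to pass `grid_objective` explicitly, which is exactly the backwards-incompatible change this PR warns about.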
Something is going on with Travis/Coveralls integration again. Re-running builds.
Yeah, I don't really know why Coveralls is reporting 3 new lines as being uncovered. I don't see an issue, since that function is called multiple times in the learning curve tests.
Even going back to the last build with respect to
Yeah, that's what I was thinking as well @mulhod 👍
Looks great! I peeked into the coverage "issue" also and, while I suspect the uncovered function is uncovered due to parallelization, I could not figure out why Coveralls would suddenly claim 3 lines as newly uncovered. ¯\_(ツ)_/¯
This looks good to me! Two extremely minor comments.
- Remove the `objective` field and remove default values for `objectives`.
- Add a check that `objectives` is not empty when running an experiment.
- Make the default value of `grid_objective` be `None` for `train()` and `cross_validate()`, and add a check in `train()` that raises an exception if we are doing grid search and the objective is not specified. We don't need a similar check for `cross_validate()` since that calls `train()` underlyingly anyway.
- Make `metric` a required parameter for `learning_curve()`.
- Update tests to use `objectives` and remove unnecessary tests, e.g., for the removed `objective` field.
- Fix other tests that were using `objective` to now use `objectives`, explicitly specify functions where needed, and explicitly specify that we are doing grid search.
- Fix code using `score` to not use that, since that is the objective function value on the test set and will not be computed when there is no such function or when we aren't doing grid search.
- Ignore `grid_objectives` if we aren't doing grid search, since this would prevent unnecessary computation when grid search is `False` but the user still specified a list of objectives.

Please test this quite thoroughly as this is a major backwards-incompatible change.
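The non-empty `objectives` check mentioned above could be sketched as follows. The dict-based config, the task names, and the helper itself are assumptions for illustration; SKLL's actual config parsing differs.

```python
# Rough sketch, not SKLL's actual implementation: tasks that perform grid
# search must have a non-empty `objectives` list, while learning_curve
# takes an explicit `metric` parameter and so needs no objectives here.
def check_objectives(config, task):
    objectives = config.get("objectives", [])
    if task in ("train", "evaluate", "cross_validate") and not objectives:
        raise ValueError(
            f"The 'objectives' list must be non-empty for the '{task}' task."
        )
    return objectives
```

Centralizing the check like this keeps the per-task rules in one place, so a missing `objectives` list fails fast at config-validation time instead of partway through an experiment.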