
fix bug in TuneWrapper for param used in predict function, issue 2472 #2479

Merged
merged 11 commits into master from fix_2472 on Apr 15, 2019

Conversation

4 participants
@berndbischl (Contributor) commented Nov 9, 2018

fixes issue #2472

Unit tests are missing. Please help and merge.

predictLearner(lrn, .model$learner.model$next.model, .newdata, ...)
arglist = list(.learner = lrn, .model = .model$learner.model$next.model, .newdata = .newdata)
arglist = insert(arglist, list(...))
arglist = insert(arglist, .model$learner.model$opt.result$x)
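
For orientation, here is a minimal sketch of how the quoted lines could fit together: the diff replaces the direct predictLearner() call (first line above) with an explicit argument list. The surrounding function body and the final do.call() are assumptions based on the quoted diff, not the merged code.

predictLearner.TuneWrapper = function(.learner, .model, .newdata, ...) {
  # apply the tuned hyperparameters to the inner learner
  lrn = setHyperPars(.learner$next.learner, par.vals = .model$learner.model$opt.result$x)
  # build the argument list explicitly instead of calling predictLearner() directly
  arglist = list(.learner = lrn, .model = .model$learner.model$next.model, .newdata = .newdata)
  arglist = insert(arglist, list(...))
  # let the tuned values override whatever predict.WrappedModel put into ...
  arglist = insert(arglist, .model$learner.model$opt.result$x)
  do.call(predictLearner, arglist)
}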

@mb706 (Contributor) commented Nov 9, 2018

Should probably only insert the parameters that have when %in% c("predict", "both").

Maybe one could just copy the lines from predict.R, if we are sure there are no other wrappers that introduce a discrepancy between the par.vals of a learner and the ... parameters on purpose.
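
A rough sketch of that filtering, assuming the ParamHelpers convention that each learner parameter carries a "when" field ("train", "predict", or "both"); the placement and variable names are illustrative only:

# keep only the tuned values whose parameter is used at predict time
tuned = .model$learner.model$opt.result$x
pars = lrn$par.set$pars
use.at.predict = vapply(names(tuned), function(pn) pars[[pn]]$when %in% c("predict", "both"), logical(1L))
arglist = insert(arglist, tuned[use.at.predict])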

@jakob-r self-assigned this Nov 12, 2018

@jakob-r (Member) commented Nov 12, 2018

Some notes regarding my observation:

  • setHyperPars within predictLearner has no effect because those values will never be requested: the hyperparameters are extracted in predict.WrappedModel and put into the ... arguments.
  • We don't know whether the par.vals were changed after the training of the TuneWrapper learner.
  • We will always overwrite the existing par.vals with the ones from the learner.
  • We assume that the user does not change par.vals between training and predict. This holds especially as in most cases both are done internally in one step.
@mb706 (Contributor) commented Nov 12, 2018

setHyperPars within predictLearner has no effect because those values will never be requested: the hyperparameters are extracted in predict.WrappedModel and put into the ... arguments.

We should probably not rely on this; right now the parameter values are given both inside the ... arguments and as part of the .learner object; it's probably good to make sure they agree.

We don't know whether the par.vals were changed after the training of the TuneWrapper learner.

How would that happen (assuming the user does not directly modify the model$learner object, in which case, too bad)?

We will always overwrite the existing par.vals with the ones from the learner.

You mean the ones being tuned over? It might be worth considering removing the parameters found in the tuning param set from the wrapped learner's param set, so the user gets an error message when trying to set a param that will be overwritten later.
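
A hedged sketch of that idea, using ParamHelpers::dropParams(); where exactly this would be done (for example inside makeTuneWrapper()) is an assumption:

# ids of the parameters the wrapper tunes over
tuned.ids = getParamIds(par.set)
# drop them from the wrapped learner's param set, so a later setHyperPars()
# on one of them is rejected (or flagged, depending on on.par.without.desc)
# instead of being silently overwritten at predict time
lrn$par.set = dropParams(lrn$par.set, tuned.ids)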

We assume that the user does not change par.vals between training and predict. This holds especially as in most cases both are done internally in one step.

I'm confused again, how would that happen? Does any part of mlr support that?

@jakob-r (Member) commented Nov 12, 2018

setHyperPars within predictLearner has no effect because those values will never be requested: the hyperparameters are extracted in predict.WrappedModel and put into the ... arguments.

We should probably not rely on this; right now the parameter values are given both inside the ... arguments and as part of the .learner object; it's probably good to make sure they agree.

That is the architecture of mlr at the moment and we are relying on it everywhere. In train and predict the parameters are read and put into the arguments.
This is probably not the best design, but keeping the arguments and the par.vals "in sync" would just be a lot of effort.
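
To illustrate the flow being described: predict.WrappedModel collects the learner's predict-time par.vals and forwards them, so an individual predictLearner method typically just passes ... on to the underlying predict function. The learner name and mypredict() below are made up for illustration only.

predictLearner.classif.mylearner = function(.learner, .model, .newdata, ...) {
  # hyperparameters with when = "predict" or "both" arrive here via ...
  mypredict(.model$learner.model, newdata = .newdata, ...)
}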

We don't know whether the par.vals were changed after the training of the TuneWrapper learner.

How would that happen (assuming the user does not directly modify the model$learner object, in which case, too bad)?

Like that, yes: mod = train(...); setHyperPars(mod$learner, ...); predict(mod, ...)

We will always overwrite the existing par.vals with the ones from the learner.

You mean the ones being tuned over? It might be worth considering removing the parameters found in the tuning param set from the wrapped learner's param set, so the user gets an error message when trying to set a param that will be overwritten later.

A warning somewhere might be useful. But I am afraid we would generate warnings for cases where we have set the par.vals in the definition of the learner. This can entail a lot of work.

We assume that the user does not change par.vals between training and predict. This holds especially as in most cases both are done internally in one step.

I'm confused again, how would that happen? Does any part of mlr support that?

See above.

My suggestion: we should merge this PR asap and add convenience later.

@mb706 (Contributor) commented Nov 12, 2018

That is the architecture of mlr at the moment and we are relying on it everywhere. In train and predict the parameters are read and put into the arguments.

Do you consider this to be a bug?

mod = train(...); setHyperPars(mod$learner, ...); predict(mod, ...)

has "model$learner <- [something]" ever been documented as supported behaviour? If the user modifies internals of objects then of course he can break things without bounds, I thought of mod$learner as such an internal.

@berndbischl (Contributor Author) commented Nov 12, 2018

I do not understand the unit test. Can you please add at least some comments on how the test is set up and why you expect certain things?

@berndbischl (Contributor Author) commented Nov 12, 2018

Also please note: a PR for fixing a certain bug is probably not the best place to discuss general architecture questions. Please move this to a clean issue if you want to discuss it...

@jakob-r (Member) commented Nov 12, 2018

Do you consider this to be a bug?

I would not call it a bug as it's just affecting some error message. But the whole code is strange. How do we know that it's really the wrong family that leads to NAs? But this is another issue and should not be discussed here.

I do not understand the unit test.

Better now?

@jakob-r (Member) commented Nov 13, 2018

@@ -74,8 +74,16 @@ trainLearner.TuneWrapper = function(.learner, .task, .subset = NULL, ...) {

#' @export
predictLearner.TuneWrapper = function(.learner, .model, .newdata, ...) {
lrn = setHyperPars(.learner$next.learner, par.vals = .model$learner.model$opt.result$x)

@mb706 (Contributor) commented Nov 13, 2018

FYI I am relatively sure that removing this line (and not putting .learner = lrn further down etc) will break things.

@jakob-r (Member) commented Nov 13, 2018

You mean .learner$next.learner? That moved to another line. But I added setHyperPars again now.

@mb706 (Contributor) commented Nov 13, 2018

2a0d9d0 is exactly what I meant; with .learner = lrn I meant the list entry ".learner" 👍

jakob-r and others added some commits Nov 13, 2018

@pat-s (Member) commented Apr 11, 2019

@jakob-r If the build passes, can this be merged? I would add an entry to NEWS.md

pat-s added some commits Apr 14, 2019

@pat-s approved these changes Apr 15, 2019

@pat-s merged commit 7ea4a57 into master Apr 15, 2019

1 of 3 checks passed

continuous-integration/travis-ci/pr: The Travis CI build is in progress
continuous-integration/travis-ci/push: The Travis CI build is in progress
deploy/netlify: Deploy preview ready!

@pat-s deleted the fix_2472 branch Apr 15, 2019
