Predict phase parameter optimization #212

Open
mb706 opened this issue Dec 17, 2019 · 3 comments
mb706 (Collaborator) commented Dec 17, 2019

It should be possible to perform efficient optimization of predict-phase parameters, maybe even simultaneously with (ordinary) train-time parameters, so that predict-phase parameters are optimized in an inner loop separate from the train-time parameters.

This was our resolution for #50 (see #50 (comment)), but apparently we don't have an issue mentioning this specifically?

berndbischl (Sponsor Member) commented:
Can we add a simple example / task here so that we can work against it, please?

mb706 (Collaborator, Author) commented Dec 17, 2019

We want to tune

library("mlr3learners")
ll = lrn("classif.glmnet")

over one of the following paramsets:

# ps1: predict-time parameter only
ps1 = ParamSet$new(list(
  ParamFct$new("s", levels = c("lambda.1se", "lambda.min"))
))

# ps2: predict-time parameter "s" plus train-time parameter "alpha"
ps2 = ParamSet$new(list(
  ParamFct$new("s", levels = c("lambda.1se", "lambda.min")),
  ParamDbl$new("alpha", lower = 0, upper = 1)
))
1. Currently, when we tune ps1, we perform both training and prediction for every configuration. This may be desirable when the Learner or the resampling is stochastic in some way.
2. The tuning machinery knows that the parameter "s" is a tags = "predict" parameter, so repeated model fits should not be necessary when tuning over ps1. There should be a way to prevent repeated train() calls and to just reuse the same model for different values of "s" (see the first sketch below).
3. We may or may not want to support tuning ps2 with all predict-time parameters tuned separately (in an inner loop) from the train-time parameters. We could also forbid this and insist that predict-time tuning is only possible when the whole search space consists of predict-time parameters. The user would then have to set up an AutoTuner for the "s" parameter and tune that one with another tuner that tunes over "alpha" (see the second sketch below). This would probably be the simplest to implement, but in that case it would be nice to have some convenience functions.
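To make point 2 concrete, here is a minimal sketch of what "reuse the same model for different values of 's'" means in plain mlr3 code, without any tuning machinery. The task ("sonar") is only a placeholder, and the learner key plus the accepted string values for "s" follow the snippet above; they may need adjusting for newer mlr3learners versions.

library("mlr3")
library("mlr3learners")

task = tsk("sonar")
learner = lrn("classif.glmnet")

# one (expensive) model fit ...
learner$train(task)

# ... reused for several values of the predict-tagged parameter "s";
# changing "s" only affects predict(), the stored model stays valid
# (predicting on the training data here just for illustration)
learner$param_set$values$s = "lambda.min"
pred_min = learner$predict(task)

learner$param_set$values$s = "lambda.1se"
pred_1se = learner$predict(task)

pred_min$score(msr("classif.ce"))
pred_1se$score(msr("classif.ce"))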
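And a rough sketch of the nested setup from point 3: an AutoTuner over "s", tuned by an outer tuner over "alpha". This is written against the newer mlr3tuning / paradox helpers (auto_tuner(), ps()), since the constructors used in the snippets above have changed since this issue was opened, and it assumes that the AutoTuner exposes the wrapped learner's "alpha" parameter to the outer search space.

library("mlr3")
library("mlr3learners")
library("mlr3tuning")
library("paradox")

# inner level: tune only the predict-time parameter "s" (the ps1 above)
at_inner = auto_tuner(
  tuner = tnr("grid_search"),
  learner = lrn("classif.glmnet"),
  resampling = rsmp("holdout"),
  measure = msr("classif.ce"),
  search_space = ps(s = p_fct(c("lambda.1se", "lambda.min"))),
  term_evals = 2
)

# outer level: tune the train-time parameter "alpha" over the AutoTuner;
# this assumes "alpha" is reachable through the AutoTuner's param_set
at_outer = auto_tuner(
  tuner = tnr("random_search"),
  learner = at_inner,
  resampling = rsmp("cv", folds = 3),
  measure = msr("classif.ce"),
  search_space = ps(alpha = p_dbl(0, 1)),
  term_evals = 10
)

at_outer$train(tsk("sonar"))

Note that, as pointed out below, this performs nested resampling (a holdout split inside every cross-validation fold), which is not the same as tuning "s" and "alpha" jointly on a single resampling.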

mb706 (Collaborator, Author) commented Jan 15, 2020

Note to myself: nesting AutoTuners behaves differently from running two different optimization methods (one for train-time, one for predict-time parameters) simultaneously, because the former performs nested resampling while the latter has only one level of resampling.
