Tuning and stacking at the same time? #1266

Closed
eddelbuettel opened this issue Oct 4, 2016 · 12 comments

@eddelbuettel

Is there a tutorial example which combines tuning (which I have working) and stacking (which I also have working)? Somehow I don't see (yet) how to "fuse" the two approaches.

@schiffner (Contributor) commented Oct 4, 2016

Unfortunately not at the moment.
(But if I remember correctly, one of Bernd's students is working on improving stacking and extending the tutorial, so it should be up in the near future.)

Here is an example of tuning the stacked learner directly via tuneParams and of tuning within nested resampling:

library(mlr)
tsk = makeClassifTask(data = iris, target = "Species")
base = c("classif.rpart", "classif.lda", "classif.svm")
lrns = lapply(base, makeLearner)
lrns = lapply(lrns, setPredictType, "prob")
m = makeStackedLearner(base.learners = lrns,
  predict.type = "prob", method = "hill.climb")

getParamSet(m)

ps = makeParamSet(
  makeDiscreteParam("classif.svm.cost", c(0.01, 0.1))
)

## tuning
tuneParams(m, tsk, resampling = makeResampleDesc("Holdout"), par.set = ps, control = makeTuneControlGrid())

## nested resampling
m2 = makeTuneWrapper(m, resampling = makeResampleDesc("Holdout"), par.set = ps, control = makeTuneControlGrid())
holdout(m2, tsk)

@eddelbuettel (Author)

Thank you, that is a very nice start.

@bhvieira (Contributor) commented Oct 4, 2016

@schiffner sorry for hijacking, but you could use makeLearners(c("classif.rpart", "classif.lda", "classif.svm"), predict.type = "prob") to one-line the whole thing.
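
For reference, the whole setup from above then shrinks to something like this (note that makeLearners() only exists from mlr 2.10 on, as mentioned further down):

library(mlr)
tsk = makeClassifTask(data = iris, target = "Species")
## one call builds the list of base learners, all with predict.type = "prob"
lrns = makeLearners(c("classif.rpart", "classif.lda", "classif.svm"),
  predict.type = "prob")
m = makeStackedLearner(base.learners = lrns,
  predict.type = "prob", method = "hill.climb")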

@schiffner (Contributor)

@catastrophic-failure: No problem, thanks. I was being lazy and just copied the example from ?makeStackedLearner. You are of course right that we could shorten it a bit.

@eddelbuettel (Author)

Fails in the CRAN version though -- works in the unreleased 2.10.

@schiffner (Contributor)

Yep. Sorry, what I wrote above was misleading. makeLearners is a new function in 2.10.

I will close this issue. If there are any questions please feel free to reopen.

@eddelbuettel (Author)

@schiffner One follow-up question, if I may.

When I stack and tune, the runs are over the cross product of all parameters across models. That seems wasteful. If I have a method A with three values, I need three runs. Add a method B with three values and I should only need three more. But when I follow what you kindly outlined above, I end up with 3 x 3 = 9.
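
To make the counting concrete, what I mean is a joint grid over the stacked learner along these lines (the values are just placeholders):

library(mlr)
ps = makeParamSet(
  makeDiscreteParam("classif.rpart.cp", c(0.001, 0.01, 0.1)),
  makeDiscreteParam("classif.svm.cost", c(0.1, 1, 10))
)
## makeTuneControlGrid() resamples every combination of the two parameters,
## i.e. the full cross product:
nrow(expand.grid(cp = c(0.001, 0.01, 0.1), cost = c(0.1, 1, 10)))  ## 9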

Should models be tuned individually before a stacking of 'locally best' models is attempted?

@schiffner (Contributor)

That's a good question... I don't really know.

My thoughts are:

  • Intuitively, since one wants the ensemble to work well, it seems reasonable to me to tune it as a whole and "live" with the cross product, or, if that gets cumbersome, to use a tuning method other than a full grid (e.g. random search). Unfortunately, I don't have much practical experience with stacking.
  • Don't tune at all and instead use different fixed hyperparameter settings to increase the diversity of the base learners? (See the sketch after the code below.)
  • Tuning the base learners individually, instead of tuning over the whole cross product, is possible by using TuneWrappers as base learners in stacking, along these lines:
library(mlr)
ps1 = makeParamSet(
  makeDiscreteParam("cp", c(0.01, 0.1))
)
lrn1 = makeTuneWrapper("classif.rpart", resampling = makeResampleDesc("Holdout"), par.set = ps1, control = makeTuneControlGrid())

ps2 = makeParamSet(
  makeDiscreteParam("cost", c(0.01, 0.1))
)
lrn2 = makeTuneWrapper("classif.svm", resampling = makeResampleDesc("Holdout"), par.set = ps2, control = makeTuneControlGrid())

lrns = list(lrn1, lrn2)
lrns = lapply(lrns, setPredictType, "prob")

lrn = makeStackedLearner(base.learners = lrns,
  predict.type = "prob", method = "average")

m = train(lrn, iris.task)
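
And, continuing in the same session, a minimal sketch of the second bullet (no tuning at all; the diversity comes from deliberately different, fixed hyperparameter values, which are arbitrary here):

lrns.div = list(
  makeLearner("classif.svm", id = "svm.lowcost", cost = 0.1, predict.type = "prob"),
  makeLearner("classif.svm", id = "svm.highcost", cost = 10, predict.type = "prob"),
  makeLearner("classif.rpart", predict.type = "prob")
)
lrn.div = makeStackedLearner(base.learners = lrns.div,
  predict.type = "prob", method = "average")
m.div = train(lrn.div, iris.task)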

@giuseppec, @berndbischl: Any insights?

schiffner reopened this Oct 5, 2016
@giuseppec (Contributor)

Hi,

@schiffner already mentioned the most important facts. One thing to add: When you do stacking using a superlearner, tuning the superlearner does not seem to be possible (see #697).
We have completely refactored several stacking methods in #1041; this still needs to be reviewed. I will try to do this for mlr 2.11.

@eddelbuettel (Author)

Thanks for the clarification, @giuseppec.

@SteveBronder (Contributor)

@giuseppec @eddelbuettel

When you do stacking using a superlearner, tuning the superlearner does not seem to be possible

On my fork I believe I am tuning a stacked learner's super learner.

It seems that you have to wrap the stacked learner in makeTuneWrapper(); then, after the tuning is done, you set all of the parameters with setHyperPars2().

An example can be found here, though it's for my fork for forecasting, so it won't work with the devel version of mlr (yet!):
https://github.com/Stevo15025/mlr#ensembles-of-forecasts

Also, I have no idea if the resampling schemes are being implemented in the correct order/fashion. But nonetheless I get a tuned super learner.
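
Against plain mlr, the mechanics of that pattern look roughly like this (sticking to the hill.climb stack and a base-learner parameter from earlier in the thread, since whether the super learner's own parameters are exposed depends on the mlr version, see #697):

library(mlr)
tsk = makeClassifTask(data = iris, target = "Species")
lrns = lapply(c("classif.rpart", "classif.lda", "classif.svm"), makeLearner,
  predict.type = "prob")
stk = makeStackedLearner(base.learners = lrns, predict.type = "prob",
  method = "hill.climb")

ps = makeParamSet(
  makeDiscreteParam("classif.svm.cost", c(0.01, 0.1))
)

## wrap the stacked learner in a tune wrapper; tuning happens during train()
tw = makeTuneWrapper(stk, resampling = makeResampleDesc("Holdout"),
  par.set = ps, control = makeTuneControlGrid())
mod = train(tw, tsk)

## extract the tuning result and fix the chosen values on the plain stacked learner
res = getTuneResult(mod)
stk.tuned = setHyperPars2(stk, res$x)
final = train(stk.tuned, tsk)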

stale bot commented Dec 19, 2019

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale bot closed this as completed Dec 26, 2019