Tuning and stacking at the same time? #1266
Comments
Unfortunately not at the moment. Here is an example for tuning directly via
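The tuning approach referred to above could look roughly like this (a minimal illustrative sketch, not the original linked example; the learner, parameter grid, and task are my own assumptions):

```r
library(mlr)

# Illustrative sketch: wrap a learner in a tuning wrapper so a
# hyperparameter is tuned via inner cross-validation at train time.
lrn = makeLearner("classif.rpart")
ps = makeParamSet(
  makeDiscreteParam("minsplit", values = c(5, 10, 20))
)
tuned.lrn = makeTuneWrapper(
  lrn,
  resampling = makeResampleDesc("CV", iters = 3),
  par.set = ps,
  control = makeTuneControlGrid()
)
mod = train(tuned.lrn, iris.task)  # tunes minsplit, then refits on all data
```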
Thank you, that is a very nice start.
@schiffner Sorry for hijacking, but you could use
@catastrophic-failure: No problem, thanks. I was being lazy and just copied the example from
It fails in the CRAN version, though; it works in the unreleased 2.10.
Yep. Sorry, what I wrote above was misleading. I will close this issue. If there are any questions, please feel free to reopen.
@schiffner One follow-up question, if I may. When I stack and tune, the runs cover the cross product of all parameters across models, which seems wasteful. If I have a method A with three values, I need three runs; adding a method B with three values should need only three more. But following what you kindly outlined above, I end up with 3 x 3 = 9 runs. Should the models be tuned individually before stacking the 'locally best' models is attempted?
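For what it's worth, the "tune each model individually, then stack" alternative raised in this question could be sketched as below. This is a hedged sketch: the learners, parameter grids, and task are assumptions, and whether `makeStackedLearner` accepts tuning wrappers as base learners may depend on your mlr version.

```r
library(mlr)

# Sketch: each base learner carries its own tuning wrapper, so tuning
# costs 3 + 3 evaluations per outer iteration rather than the 3 x 3
# cross product; the self-tuning learners are then stacked.
inner = makeResampleDesc("CV", iters = 3)
ctrl = makeTuneControlGrid()

lrn.a = makeTuneWrapper(
  makeLearner("classif.rpart"), inner,
  par.set = makeParamSet(makeDiscreteParam("minsplit", values = c(5, 10, 20))),
  control = ctrl)

lrn.b = makeTuneWrapper(
  makeLearner("classif.ksvm"), inner,
  par.set = makeParamSet(makeDiscreteParam("C", values = c(0.5, 1, 2))),
  control = ctrl)

stacked = makeStackedLearner(
  base.learners = list(lrn.a, lrn.b),
  super.learner = "classif.logreg",  # may require predict.type = "prob" on some versions
  method = "stack.cv")

mod = train(stacked, sonar.task)  # two-class example task shipped with mlr
```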
That's a good question... I don't really know. My thoughts are:
@giuseppec, @berndbischl: Any insights?
Hi, @schiffner already mentioned the most important facts. One thing to add: when you do stacking using a super learner, tuning the super learner does not seem to be possible (see #697).
Thanks for the clarification, @giuseppec.
On my fork I believe I am tuning a stacked learner's super learner. It seems that you have to wrap the stacked learner. An example can be found here, though it's for my fork for forecasting, so it won't work with the devel version of mlr (yet!). Also, I have no idea whether the resampling schemes are being implemented in the correct order/fashion, but nonetheless I get a tuned super learner.
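The wrapping idea described above might look roughly like this (a hedged sketch, not the code from the linked fork: parameter ids exposed by a stacked learner can differ between mlr versions, so check `getParamSet(stacked)` before trusting the names below):

```r
library(mlr)

# Sketch: wrap the *whole* stacked learner in a tuning wrapper so the
# tuned parameter belongs to the super learner. The parameter id below
# is an assumption; inspect getParamSet(stacked) for the real ids.
stacked = makeStackedLearner(
  base.learners = list(
    makeLearner("classif.rpart", id = "base.rpart"),
    makeLearner("classif.kknn", id = "base.kknn")),
  super.learner = "classif.rpart",
  method = "stack.cv")

ps = makeParamSet(
  makeDiscreteParam("minsplit", values = c(5, 10, 20))  # assumed to hit the super learner
)
tuned.stack = makeTuneWrapper(
  stacked,
  resampling = makeResampleDesc("CV", iters = 3),
  par.set = ps,
  control = makeTuneControlGrid())
```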
Is there a tutorial example that combines tuning (which I have working) and stacking (which I also have working)? Somehow I don't yet see how to "fuse" the two approaches.