[Feature request] Arbitrary base learner #5802
@zachmayer Is
Oops, my bad. When you say "base learner," you mean that you want to fit a boosted ensemble consisting of your custom models?
Yes, exactly. So, for example, if I wanted to boost a kernel SVM, I could do that.
Here is a related issue that has just been opened: adding AdaBoost to cuML might be a good stop-gap measure.
sklearn's AdaBoost already supports arbitrary base learners. XGBoost is way better than AdaBoost, though, and supports a bunch of features AdaBoost doesn't have.
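For example, a minimal sketch of the sklearn side, assuming sklearn >= 1.2 (where the parameter is `estimator`; older versions call it `base_estimator`):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import AdaBoostRegressor
from sklearn.svm import SVR

X, y = make_regression(n_samples=200, n_features=5, random_state=0)

# Any estimator whose fit() accepts sample_weight can serve as the base
# learner -- here an RBF-kernel SVM.
model = AdaBoostRegressor(estimator=SVR(kernel="rbf"), n_estimators=10,
                          random_state=0)
model.fit(X, y)
print(model.predict(X[:3]))
```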
@tunguz "Run AdaBoost on a GPU" isn't really what I'm looking for. "Run AdaBoost with an arbitrary base learner, arbitrary loss function, arbitrary gradient, arbitrary evaluation, early stopping, and a mix of parallel learners (aka bagging) and boosting" would suit my needs, but that's another way to say "run XGBoost with an arbitrary base learner" 😁
It's pretty cool that I can define my own loss function and gradient for XGBoost, and then use the linear, tree, or DART base learners to optimize my loss function.
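For reference, this is the documented custom-objective hook in `xgboost.train`: the objective returns the per-row gradient and hessian of the loss with respect to the raw prediction, and works with any of the built-in boosters:

```python
import numpy as np
import xgboost as xgb
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=200, n_features=5, random_state=0)
dtrain = xgb.DMatrix(X, label=y)

def squared_error_obj(preds, dtrain):
    # Gradient and hessian of 0.5 * (pred - label)^2 w.r.t. the raw score.
    labels = dtrain.get_label()
    grad = preds - labels
    hess = np.ones_like(preds)
    return grad, hess

# "booster" can be "gbtree", "gblinear", or "dart" -- the three built-in
# base learners this thread is about extending.
bst = xgb.train({"booster": "gbtree"}, dtrain, num_boost_round=10,
                obj=squared_error_obj)
```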
It'd be really cool if I could specify my own base learner, perhaps in the form of an sklearn class with a fit method, a predict method, and support for sample weights.
It'd really open up a whole new world of possibilities to be able to use the XGBoost algorithm to fit a wider range of base learners.
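To make the request concrete, here is a hypothetical sketch of what that could look like. This is not an existing XGBoost API; it is just a plain Newton-boosting loop over any sklearn-style estimator with `fit(X, y, sample_weight)` and `predict(X)`, and the names (`boost`, `make_learner`) are illustrative:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.svm import SVR

def boost(X, y, make_learner, n_rounds=10, lr=0.1):
    """Fit a boosted ensemble of arbitrary base learners to squared error."""
    pred = np.zeros_like(y, dtype=float)
    learners = []
    for _ in range(n_rounds):
        grad = pred - y           # gradient of 0.5 * (pred - y)^2
        hess = np.ones_like(y)    # hessian (constant for squared error)
        # Newton step: regress -grad/hess, weighting rows by the hessian.
        base = make_learner()
        base.fit(X, -grad / hess, sample_weight=hess)
        pred += lr * base.predict(X)
        learners.append(base)
    return learners, pred

X, y = make_regression(n_samples=200, n_features=5, random_state=0)
learners, pred = boost(X, y, lambda: SVR(kernel="rbf"))
```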