[MRG] Gradient boosting OOB improvement #2188
Conversation
Is this what the R package does? It makes sense to me.
yes (but R's gbm also smooths the OOB improvements using LOESS)
Peter Prettenhofer
pl is the standard :)
I like the example but I feel it is a bit hard to read. Could you add some comments please?
n => n_samples ?
@arjoly addressed - thx
@amueller I hope the example is clearer now
(nitpicking) maybe import GradientBoostingClassifier directly
I would add a test to check that. Otherwise LGTM.
@arjoly I've added two regression tests and updated the narrative docs |
Thanks for addressing the comments. 👍 for merge.
LGTM!
merged - thanks for the reviews! |
This PR addresses issue #1802 by @yanirs.
It deprecates oob_score_ and introduces oob_improvement_, which gives the relative improvement from adding the i-th tree, measured on the out-of-bag examples. The PR includes an example that shows how oob_improvement_ can be used to estimate the "optimal" number of iterations (basically an alternative to cross-validation).
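The idea above can be sketched roughly as follows: since oob_improvement_[i] is the OOB loss improvement contributed by tree i, the cumulative sum peaks near the point where extra trees stop helping. This is a minimal illustration, not the PR's actual example; the dataset and hyperparameters are made up for demonstration, and oob_improvement_ is only populated when subsample < 1.0:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Toy data, chosen only for illustration
X, y = make_classification(n_samples=500, random_state=0)

# subsample < 1.0 leaves out-of-bag samples on which improvements are measured
clf = GradientBoostingClassifier(n_estimators=100, subsample=0.5, random_state=0)
clf.fit(X, y)

# Cumulative OOB improvement; its argmax gives a cheap estimate of the
# "optimal" number of boosting iterations (an alternative to cross-validation)
cum_oob = np.cumsum(clf.oob_improvement_)
best_n = int(np.argmax(cum_oob)) + 1
print(best_n)
```

Note that the OOB estimate tends to be pessimistic (it typically suggests stopping earlier than cross-validation would), which is one reason R's gbm smooths the improvements before picking the maximum.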