[ML] Adds exponent aggregator to the inference model definition #1375
Conversation
Good catch! I think the comment needs updating and I'd like to make sure this fails to compile if we add a new loss function, but otherwise looks good.
//! Allows to use logistic regression aggregation.
//!
//! Given a weights vector $\vec{w}$ as a parameter and an output vector from the ensemble $\vec{x}$,
//! it computes the logistic regression function \f$1/(1 + \exp(-\vec{w}^T \vec{x}))\f$.
This comment looks wrong. The forest is predicting `sum_i leaf_i = log(1 + x)`, so for the weighted case the aggregator would be `exp(w^T x) - 1`, no?
In fact, thinking about it, 1 is the offset parameter which we also need to supply.
You are right, the aggregator computes `exp(w^T x)`, cf. `CMsle::transform`.
Ok great.
And the comment on the offset was me forgetting what we're actually doing. For posterity, we compute leaf values by `argmin_x (log(offset + exp(x)) - log(offset + actual))^2`, i.e. the important part is that the prediction is exponentiated, so the forest is effectively predicting logs. The offset just governs how we penalise relative errors between the predictions and actuals at the scale of the offset.
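To make the agreed-upon behaviour concrete, here is a minimal sketch of what an exponent aggregator does: it forms the weighted sum of the per-tree outputs and exponentiates it, mapping the forest's log-scale prediction back to the original scale. The function name and signature are hypothetical, not the actual `ml-cpp` API:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical sketch: compute exp(w^T x) from per-tree outputs x and
// weights w. MSLE-trained forests predict on a log scale, so the
// aggregator exponentiates the weighted sum to recover the target scale.
double exponentAggregate(const std::vector<double>& weights,
                         const std::vector<double>& treeOutputs) {
    assert(weights.size() == treeOutputs.size());
    double weightedSum = 0.0;
    for (std::size_t i = 0; i < weights.size(); ++i) {
        weightedSum += weights[i] * treeOutputs[i];
    }
    return std::exp(weightedSum);
}
```

With unit weights, trees that output `log(2)` and `log(3)` aggregate to `exp(log 2 + log 3) = 6`, i.e. the per-tree factors multiply on the original scale.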
Thank you @tveasey for the review. I addressed your comments. It would be great if you could take another look.
LGTM
retest
This PR adds an exponent aggregator required for regression models trained with the MSLE objective function.
Closes #1372