Init score implementation #1778
Basically, in each boosting round LightGBM trains a weak tree model on the residuals of the dataset. Practically speaking, `init_score` is used when you need to continue training a model against some dataset: it represents the output of the previous model.
For predictions, you should add the initial scores yourself.
If I am understanding properly, the scores I obtain from the predictions of a model trained with init_score would be:
yeah, and you should use the
Sorry, I'd like to ask a follow-up question about this.

Is lgb.predict(raw_score=True) == the lgb model's raw_score + the previous learner's raw_score, or is lgb.predict(raw_score=True) == the lgb model's raw_score alone? Many thanks.
@JYLFamily The prediction result doesn't include the init_score; you should add it yourself.
I am using lightgbm for modelling a variable which is poisson distributed.
For this variable I have a bias provided by the logarithm of "e", where "e" is one of the predictors.
If I have understood properly, I could set the init score as the logarithm of "e" to train lightgbm.
Unluckily I cannot use any init score for the predictions, since the predict() method does not accept a Dataset object.
I wonder how this "init score" is implemented, and why there is no need to use it for the predictions?
Thanks in advance