m2cgen output for xgboost with binary:logistic objective returns raw (not transformed) scores #96
Comments
Hey @ehoppmann! Thank you so much for reporting this issue and for your kind feedback! When you say "transformed", do you mean a probability value between 0 and 1? If so, then this is exactly what code generated by m2cgen should return. Can you perhaps share the very last line of the generated code?
Hey there. I am indeed passing an instance of type …
@ehoppmann I see, thank you for sharing! Apparently the decision whether to apply the sigmoid or not should be made based on the objective function instead. I believe that the described issue is a bug and that it should be fixed. Thank you so much for reporting it!
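The fix described above can be sketched as follows. This is a minimal illustration of the idea, not m2cgen's actual implementation; the function name and the way the objective string is obtained are assumptions.

```python
import math

# Hypothetical post-processing step: objectives whose raw scores are
# log-odds get the sigmoid applied; all others pass through unchanged.
SIGMOID_OBJECTIVES = {"binary:logistic"}

def transform_output(raw_score, objective):
    """Turn a raw model score into the final prediction based on the objective."""
    if objective in SIGMOID_OBJECTIVES:
        # log-odds -> probability in [0, 1]
        return 1.0 / (1.0 + math.exp(-raw_score))
    # e.g. reg:squarederror already returns the value on the target scale
    return raw_score
```

With a dispatch like this, generated code for a `binary:logistic` model would return probabilities directly, while regression objectives would be unaffected.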
@ehoppmann As a workaround you can just try passing the …
Hi, in the xgb2c code, what does the `output` parameter of the `score` function mean, and how can I get the predicted probability in [0, 1]? Thanks.
Our xgboost models use the `binary:logistic` objective function; however, the m2cgen-converted versions of the models return raw scores instead of the transformed scores. This is fine as long as the user knows it is happening! I didn't, so it took a while to figure out what was going on. I'm wondering if perhaps a useful warning could be raised to alert users of this issue? The warning could include a note that they can transform these scores back to the expected probabilities in [0, 1] via

```
prob = logistic.cdf(score - base_score)
```

where `base_score` is an attribute of the xgboost model.

In our case, I'd like to minimize unnecessary processing on the device, so I am actually happy with the current m2cgen output and will instead inverse-transform our threshold when evaluating the output of the transpiled model... but it did take me a bit to figure out what was going on, which is why I'm suggesting that a user-friendly message be raised when an unsupported objective function is encountered.
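The transformation described above can be sketched in plain Python. The variable values here are hypothetical; `logistic.cdf` refers to scipy's logistic distribution CDF, which is the same function as the sigmoid `1 / (1 + exp(-x))`:

```python
import math

def sigmoid(x):
    # Equivalent to scipy.stats.logistic.cdf(x)
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical values: the raw score emitted by the m2cgen-generated
# code and the base_score attribute of the trained xgboost model.
raw_score = 0.8
base_score = 0.5

# Recover the probability the original model would have predicted.
prob = sigmoid(raw_score - base_score)
```

Going the other way, as the author suggests, one can instead apply the logit (the inverse of the sigmoid) to a probability threshold once, offline, and compare raw scores against it on the device.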
Thanks for creating & sharing this great tool!