Explainer for ranking task (lambdamart) #570
Comments
@doramir AFAIK this area is still largely unexplored. One of the methods that has been proposed is this paper: https://arxiv.org/abs/1809.03857
Any development on this? Do you think this approach is valid for local explanations?
Hello, I'm Jaspreet, one of the authors of the aforementioned paper. We have a couple of recently published approaches for rankings. We are in the process of releasing a package combining these approaches and the paper kretes mentioned. In the meantime, we are happy to provide support on integrating these approaches into existing interpretability packages. Feel free to get in touch with me: singh@l3s.de
I'm using XGBoost with LambdaMART as the objective (`rank:pairwise`). The problem with this model is that its prediction is a ranking for each group: the score the model gives an item is meaningful only relative to the other items in the same group, and means nothing outside the group.

As far as I know, SHAP does not do group-wise (listwise) explanation; it treats each item (row) as an individual rather than as part of a group, and tries to explain why the model gave that value to that item.

Is there a way to understand why the model gives this ranking to this group of items? Why it put item x before item y, and so on?

This problem applies to all ranking tasks, in XGBoost, LightGBM, and CatBoost alike.