
why not default explain_instance to 1st top label, rather than 1st label? #32

Closed · asross opened this issue Oct 13, 2016 · 3 comments

asross commented Oct 13, 2016

At least for text, explain_instance gives you an explanation for label 1 by default, but that's rarely the label you're most interested in. It seems like it would be much more intuitive to explain the model's actual (top) prediction by default, and it would make it a bit easier for most users to get started with the library.

So basically I'm suggesting changing:

    def explain_instance(self,
                         text_instance,
                         classifier_fn,
                         labels=(1,),
                         top_labels=None,
                         num_features=10,
                         num_samples=5000,
                         distance_metric='cosine',
                         model_regressor=None):

to

    def explain_instance(self,
                         text_instance,
                         classifier_fn,
                         labels=None,
                         top_labels=1,
                         num_features=10,
                         num_samples=5000,
                         distance_metric='cosine',
                         model_regressor=None):
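
For reference, the current API can already explain the top prediction when asked explicitly: top_labels is an integer count of highest-probability labels to explain, not a tuple of label indices. A minimal sketch, where the toy data, class names, and scikit-learn pipeline are illustrative only:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from lime.lime_text import LimeTextExplainer

    # Toy binary sentiment model; the data here is illustrative only.
    texts = ['great movie', 'terrible movie', 'loved it', 'hated it']
    labels = [1, 0, 1, 0]
    pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
    pipeline.fit(texts, labels)

    explainer = LimeTextExplainer(class_names=['negative', 'positive'])

    # top_labels=1 ignores `labels` and explains whichever class has the
    # highest predicted probability for this instance.
    exp = explainer.explain_instance('great movie, loved it',
                                     pipeline.predict_proba,
                                     top_labels=1,
                                     num_features=4)

    # Reports which label(s) were actually explained, e.g. [1].
    print(exp.available_labels())

With top_labels set, the labels argument is ignored and exp.available_labels() reports which class the explanation is for; the suggestion above is only about which of these two modes should be the default.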
marcotcr (Owner) commented
I am assuming that the most common use case is binary prediction. When the prediction is binary, you always want to explain label 1, since that keeps 0 on the left and 1 on the right in the visualization, even if 0 is the top predicted label.

asross (Author) commented Oct 14, 2016

Makes sense. You do know the number of classes, so you could default to label 1 for binary classification and the top label otherwise.

I do think a large percentage of users will want to use this library to explain the prediction their model actually made, and it seems like the default options should facilitate that. You could also add another function that wraps explain_instance, I guess.
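
A hypothetical wrapper along those lines might look like this (explain_top_prediction is an illustrative name, not part of lime's API):

    def explain_top_prediction(explainer, text_instance, classifier_fn, **kwargs):
        # Illustrative sketch, not part of lime: explain label 1 for binary
        # problems (preserving the 0-left / 1-right visualization convention),
        # and the single top predicted label otherwise.
        probs = classifier_fn([text_instance])[0]
        if len(probs) == 2:
            return explainer.explain_instance(text_instance, classifier_fn,
                                              labels=(1,), **kwargs)
        return explainer.explain_instance(text_instance, classifier_fn,
                                          top_labels=1, **kwargs)

This keeps the binary visualization convention intact while defaulting multiclass users to an explanation of their model's actual prediction.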

marcotcr (Owner) commented

I will be making major changes to the interface soon to include other explanation methods I've developed, so I'll keep this suggestion in mind.
Thanks,
