Here, at least for text, `explain_instance` will give you an explanation for why the model predicted the first label, but that's rarely the one you're most interested in by default. It seems like it would be much more intuitive to explain your model's actual (top) prediction by default, and would make it a bit easier for most users to get started using the library.
I am assuming the most common use case is binary prediction. When the prediction is binary, you want to always explain label 1, as that keeps 0 on the left and 1 on the right in the visualization, even if 0 is the top predicted label.
Makes sense. You do know the number of classes, so you could default to label 1 for binary classification and the top label otherwise.
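The rule described above could be sketched as a small helper. This is a hypothetical function, not part of LIME's API; the name `default_labels` and its signature are assumptions for illustration:

```python
import numpy as np

def default_labels(probs, n_classes):
    """Pick which label(s) to explain by default.

    Hypothetical helper sketching the rule discussed above: for binary
    classifiers, always explain label 1 (keeping 0 on the left and 1 on
    the right in the visualization); otherwise explain the top
    predicted label.
    """
    if n_classes == 2:
        return (1,)
    return (int(np.argmax(probs)),)

# Binary model: label 1 is explained even though class 0 wins.
print(default_labels([0.9, 0.1], n_classes=2))       # (1,)
# Multiclass model: explain the actual top prediction.
print(default_labels([0.1, 0.2, 0.7], n_classes=3))  # (2,)
```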
I do think a large percentage of users will want to use this library to explain the prediction their model actually made, and it seems like the default options should facilitate that. You could also add another function that wraps explain_instance, I guess.
So basically I'm suggesting changing the default so that `explain_instance` explains the model's actual (top) prediction rather than the first label.