I tried the following: I load an explainer trained once on the appropriate training data, then query it multiple times with the exact same feature vector. When I look at the top 5 reasons, the probability shown for each feature varies slightly on every call, and in some cases the shift is large enough to change the top 5 reasons themselves. This doesn't seem like natural behavior, since I'm using the same test data each time.
Looking at the code itself, I suspect something happens when the perturbed samples are generated around the test instance. Maybe the random seed needs to be controlled wherever randomness is involved, or maybe there is something wrong with my trial; please let me know.
See #67. This is not a bug - it is expected, as we rely on random sampling. If you want the explanation to be the same every time, you need to set a random seed.
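To illustrate the point about random sampling, here is a minimal sketch (plain NumPy, not the explainer's actual code) of why perturbation-based explanations drift between runs and why fixing a seed makes them deterministic. The `perturb` helper is hypothetical; it just mimics drawing a neighborhood of samples around the test instance, which is the step the issue suspects.

```python
import numpy as np

def perturb(instance, n_samples, rng):
    # Hypothetical stand-in for the explainer's sampling step:
    # draw Gaussian perturbations around the test instance.
    return instance + rng.normal(size=(n_samples, instance.shape[0]))

x = np.array([1.0, 2.0, 3.0])

# Unseeded: each call draws a different neighborhood, so the fitted
# local weights (and hence the top-5 ranking) can shift between runs.
a = perturb(x, 5, np.random.default_rng())
b = perturb(x, 5, np.random.default_rng())

# Seeded: identical neighborhoods, hence identical explanations.
c = perturb(x, 5, np.random.default_rng(42))
d = perturb(x, 5, np.random.default_rng(42))
assert np.array_equal(c, d)
```

If the explainer in question is LIME, its tabular explainer accepts a `random_state` argument at construction time for exactly this purpose; otherwise, seeding the library's random source (e.g. `np.random.seed`) before each call has the same effect.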