
Certainty of model predictions and explainers are two different things #328

azqanadeem opened this issue Jun 1, 2022 · 0 comments
In Section 3.5, Properties of Individual Explanations, "Certainty" covers only the confidence of the ML model, not the confidence of the explainer itself. Both matter: recent research on adversarial attacks against XAI methods [1] has shown that the model prediction and the explanation can each be targeted, individually or together, to produce whatever output an adversary likes. This implies that, in addition to reporting how confident a model is about a given prediction, it is also important to report how confident the explainer is about the explanation it produces.

[1] Dombrowski, A. K., Alber, M., Anders, C., Ackermann, M., Müller, K. R., & Kessel, P. (2019). Explanations can be manipulated and geometry is to blame. Advances in Neural Information Processing Systems, 32.
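
As a rough illustration of what "explainer confidence" could look like in practice, here is a minimal sketch (not from the book or this issue) that estimates the stability of a local explanation by re-running an attribution method under small input perturbations and reporting the spread of the resulting attributions. The `explain` callable is a hypothetical stand-in for any local explainer (e.g. LIME, SHAP, or gradient-based attributions); the perturbation scheme and noise scale are assumptions, not a prescribed method.

```python
import numpy as np

def explanation_stability(explain, x, n_runs=30, noise_scale=0.01, rng=None):
    """Estimate per-feature mean and std of attributions under input perturbations.

    explain: callable mapping a 1-D input array to a 1-D attribution array
             (hypothetical wrapper around any local explainer).
    x:       the instance being explained.
    """
    rng = np.random.default_rng(rng)
    attributions = []
    for _ in range(n_runs):
        # Perturb the input slightly and recompute the explanation.
        x_perturbed = x + rng.normal(scale=noise_scale, size=x.shape)
        attributions.append(explain(x_perturbed))
    attributions = np.stack(attributions)
    # A small std relative to the mean suggests the explanation is stable;
    # a large std signals that tiny input changes flip the explanation,
    # which is exactly the fragility exploited in [1].
    return attributions.mean(axis=0), attributions.std(axis=0)
```

Reporting the per-feature spread alongside the attribution values would give users a sense of how much to trust the explanation, analogous to reporting the model's predictive confidence.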
