
GraphGradCAMExplainer use of backpropagation #36

Open
CarlinLiao opened this issue Sep 21, 2022 · 1 comment

Comments

@CarlinLiao

When using the GraphGradCAMExplainer, we use a pretrained torch GNN model set to eval mode, since we're no longer training the model. However, to find the node importances, the Explainer module uses backpropagation: it weights the hooked activation maps by their gradient coefficients, which shouldn't be possible on an eval model instance.


For whatever reason, this doesn't throw an error in the recommended Python 3.7, dgl 0.4.3post2, and torch 1.10 environment, but it does in my more up-to-date Python 3.9, dgl 0.9, torch 1.12.1 environment, even though the code is identical.

The only solution I've found so far is to set the model used in the Explainer to training mode before running the explainer, but that's far from ideal.

Is there a way to find the node importances without committing to backpropagation? Is that what backpropagating in the original histocartography environment does instead? If not, is it not an issue that the model is being updated via backpropagation during the process of explaining node importance?
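For context on the mechanism being discussed: eval mode does not disable autograd, so Grad-CAM-style gradients can in principle be computed on an eval model. A minimal sketch of the hook-and-backprop pattern, using a hypothetical stand-in network rather than histocartography's actual GNN (the model, hook target, and variable names here are assumptions for illustration):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a GNN; Grad-CAM-style importance only needs
# autograd, which eval() does NOT disable. eval() only changes the behavior
# of layers like dropout and batch norm; gradients fail only under a
# torch.no_grad() context or when activations don't require grad.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()  # inference behavior, but autograd still works

activations = {}

def save_activation(module, inputs, output):
    output.retain_grad()            # keep grad on the non-leaf activation
    activations["feat"] = output

hook = model[1].register_forward_hook(save_activation)

x = torch.randn(3, 4)
with torch.enable_grad():           # guard against an enclosing no_grad()
    logits = model(x)
    logits[:, 1].sum().backward()   # backprop w.r.t. one class score

# The hooked activation now carries the gradients Grad-CAM weights it by.
print(activations["feat"].grad.shape)  # torch.Size([3, 8])
hook.remove()
```

If the newer environment raises an error here, a likely culprit is an enclosing `torch.no_grad()` (or tensors created without `requires_grad`) rather than eval mode itself, which would explain why flipping to training mode happens to work around it.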

@CarlinLiao
Author

Did a bit more of my own investigation:

  • The only modification I needed to make to get this to work was to set the model to training mode, after the setup process that creates the model sets it to eval. I suspect that this might not be necessary with earlier versions of torch, but I couldn't confirm.
  • Doing backprop doesn't change or update the model so long as we don't call step on the optimizer or use the gradient to calculate an update, so this is fine from a model-use standpoint.
  • Stylistically, best practice is likely to keep the model in eval mode whenever we're not intending to update it. Can we do this importance calculation without leaning on the backprop functionality?
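The second bullet can be checked directly: calling backward() only populates the .grad fields, and the parameters themselves are untouched until an optimizer step is taken. A minimal sketch (the toy model here is an assumption for illustration):

```python
import torch
import torch.nn as nn

# Demonstrate that backward() accumulates gradients without mutating weights.
model = nn.Linear(4, 2)
before = model.weight.detach().clone()

loss = model(torch.randn(5, 4)).sum()
loss.backward()                     # stores gradients in .grad only

assert torch.equal(model.weight.detach(), before)   # weights unchanged
assert model.weight.grad is not None                # gradients were stored

opt = torch.optim.SGD(model.parameters(), lr=0.1)
opt.step()                          # only now do the weights change
assert not torch.equal(model.weight.detach(), before)
```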
