Make models compatible with Captum #3990
Conversation
Thank you :) This looks great. Left a few minor comments :)
examples/captum_explainability.py
Outdated
data = dataset[0]

class Net(torch.nn.Module):
Shall we use torch_geometric.nn.models.GCN here?
I used the exact same model as in examples/gnn_explainer.py. Should I change it nevertheless?
Yeah, I think it is better to directly use pre-defined models.
Thanks for the update. After adding a Captum test, I think this PR is ready to get merged.
x = F.relu(self.conv1(x, edge_index))
x = F.dropout(x, training=self.training)
x = self.conv2(x, edge_index)
return F.log_softmax(x, dim=1)
Ok, what does that mean in particular? Is the usage of log_softmax correct?
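For reference, log_softmax returns the logarithm of the softmax probabilities, which pairs with a negative-log-likelihood style loss. A minimal pure-Python sketch of the computation (no torch assumed; the function name here just mirrors the torch one):

```python
import math

def log_softmax(xs):
    """Numerically stable log(softmax(x)) over a list of floats."""
    m = max(xs)  # subtract the max before exponentiating to avoid overflow
    log_sum = m + math.log(sum(math.exp(x - m) for x in xs))
    return [x - log_sum for x in xs]

scores = [2.0, 1.0, 0.1]
out = log_softmax(scores)
# exponentiating the outputs recovers a valid probability distribution
assert abs(sum(math.exp(v) for v in out) - 1.0) < 1e-9
```

Because the outputs are log-probabilities rather than raw scores, attribution methods that assume probability-like outputs may behave differently than with plain logits.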
Co-authored-by: Matthias Fey <matthias.fey@tu-dortmund.de>
As discussed in issue #3858, we can add a to_captum method for easy use of Captum. The method adjusts the forward function of a model. I have added an example of edge explainability for node classification. I have tested several Captum methods (IntegratedGradients, Saliency, InputXGradient, GuidedBackprop, Deconvolution, ShapleyValueSampling, KernelShap, FeatureAblation), all of which work correctly with the Captum example given here.
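The forward-adjusting idea can be sketched in plain Python (hypothetical names, not the actual PyG or Captum API): Captum-style attribution methods call the model with the input being attributed as the first argument, so a thin wrapper reorders the arguments and holds the remaining model inputs fixed.

```python
class ForwardWrapper:
    """Hypothetical sketch: expose the attributed input (e.g. an edge mask)
    as the first positional argument, with the other model inputs fixed."""

    def __init__(self, forward_fn, *fixed_args):
        self.forward_fn = forward_fn
        self.fixed_args = fixed_args

    def __call__(self, mask):
        # the attribution library only ever varies `mask`;
        # node features, edge_index, etc. stay constant
        return self.forward_fn(mask, *self.fixed_args)

# toy stand-in for a model forward: weights fixed features by the mask
def toy_forward(mask, features):
    return [m * f for m, f in zip(mask, features)]

wrapped = ForwardWrapper(toy_forward, [1.0, 2.0, 3.0])
print(wrapped([0.0, 1.0, 1.0]))  # → [0.0, 2.0, 3.0]
```

The real method additionally has to translate the mask into the graph-specific arguments of the GNN forward pass, but the argument-reordering pattern is the same.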
@rusty1s Now that I have implemented a CaptumModel class, the to_captum method seems redundant. Alternatively, I could just overwrite the forward function instead of creating a new model. Do you have a preference, and do you see an advantage in not creating a new model?
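The two options being weighed can be contrasted in a small pure-Python sketch (hypothetical code, not the PyG implementation): wrapping keeps the original model intact, while overwriting forward in place mutates the object callers already hold.

```python
class Model:
    def forward(self, x, scale):
        return [v * scale for v in x]

# Option A: wrap in a new object; the original model is untouched.
class WrappedModel:
    def __init__(self, model, scale):
        self.model, self.scale = model, scale

    def forward(self, x):
        return self.model.forward(x, self.scale)

# Option B: overwrite forward in place; callers keep the same object,
# but the original signature is lost unless explicitly restored.
def patch_forward(model, scale):
    original = model.forward
    model.forward = lambda x: original(x, scale)
    return model

m = Model()
w = WrappedModel(m, 2.0)
assert w.forward([1.0, 2.0]) == [2.0, 4.0]
assert m.forward([1.0], 3.0) == [3.0]       # Option A leaves m unchanged
patch_forward(m, 2.0)
assert m.forward([1.0, 2.0]) == [2.0, 4.0]  # Option B mutates m itself
```

One practical argument for the wrapper (Option A) is that the same model instance can keep being used for normal training and inference while the Captum-facing object exists alongside it.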