
Run GNNExplainer #24

Closed
paulamartingonzalez opened this issue May 20, 2021 · 8 comments
Labels
xgraph Interpretability of Graph Neural Networks

Comments

@paulamartingonzalez

paulamartingonzalez commented May 20, 2021

I am trying to run the GNNExplainer code. Although I used install.batch to generate the conda environment, I am having issues with the packages, namely with tap (in the line `from tap import Tap`). I tried installing it myself, and I can import tap, but it doesn't seem to contain anything called Tap. Could you guide me here, please?

Also, while going through the example, I was wondering which settings I should change to run it on graph-level predictions. Is there an example for that? And is it possible to get the feature importances and the important subgraphs? I didn't see any outputs in the explain functions.

Thanks in advance!

@Oceanusity
Collaborator

Hi, thank you for your issue.

First, the Tap package can be installed with the command `pip install typed-argument-parser==1.5.4`.

Second, if you want to explain graph-level predictions, you can simply set the explain_graph attribute to True (`explain_graph=True`). In addition, GNNExplainer returns an importance score for each edge; you can find it as the return value `edge_masks` of the forward function of GNNExplainer. Note that `edge_masks` is a list, where `edge_masks[0]` is the edge mask for class 0 and `edge_masks[1]` is the edge mask for class 1.
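To illustrate the layout described above, here is a pure-Python sketch with made-up numbers (the list-of-masks structure follows this comment, but the `top_edges` helper and the scores are hypothetical): pick the mask for the class you care about and keep the top-scoring edges under a sparsity budget.

```python
# Hypothetical raw edge importance scores, one list per class,
# mirroring the edge_masks layout described above.
edge_masks = [
    [0.9, -1.2, 2.3, 0.1],   # class 0
    [-0.4, 1.7, 0.2, -2.0],  # class 1
]

def top_edges(mask, sparsity):
    """Keep the (1 - sparsity) fraction of highest-scoring edges.

    `sparsity` here is assumed to mean the fraction of edges to drop,
    matching the --sparsity flag used elsewhere in this thread.
    """
    keep = max(1, round(len(mask) * (1 - sparsity)))
    ranked = sorted(range(len(mask)), key=lambda i: mask[i], reverse=True)
    return sorted(ranked[:keep])

# Explain class 1 at sparsity 0.5: keep the top half of the edges.
print(top_edges(edge_masks[1], sparsity=0.5))  # edge indices [1, 2]
```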

@Oceanusity
Collaborator

One more thing: a sigmoid is applied to the `edge_mask` values by default inside the propagate method of the `torch_geometric.nn.MessagePassing` class. Thus, the `edge_mask` values used during message passing lie in the range 0-1 after the sigmoid, but the raw return value `edge_masks` of the forward function is not restricted to the range 0-1.
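As a quick sanity check of that point (plain Python rather than torch, with invented scores): the sigmoid maps raw, unbounded mask values into (0, 1), which is what happens to `edge_mask` during message passing, while the raw `edge_masks` return value stays unbounded.

```python
import math

def sigmoid(x):
    """Logistic sigmoid, the same squashing that message passing
    applies to edge_mask values by default."""
    return 1.0 / (1.0 + math.exp(-x))

raw_scores = [-3.0, 0.0, 2.5]          # raw mask values: unbounded
squashed = [sigmoid(s) for s in raw_scores]

# All squashed values now lie strictly between 0 and 1.
assert all(0.0 < s < 1.0 for s in squashed)
print(squashed)  # roughly [0.047, 0.5, 0.924]
```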

@mengliu1998 mengliu1998 added the xgraph Interpretability of Graph Neural Networks label May 21, 2021
@paulamartingonzalez
Author

paulamartingonzalez commented May 21, 2021

Many thanks for your help! I have everything installed now! Nevertheless, I am encountering a few issues when running the code:

  • If I run `python -m benchmark.kernel.pipeline --task explain --model_name [GCN_2l/GCN_3l/GIN_2l/GIN_3l] --dataset_name [ba_shape/ba_lrp/tox21/clintox] --target_idx [0/2] --explainer GNNExplainer --sparsity [0.5/...]` from the console, I get the error: `-m: error: argument --target_idx: invalid int value: '[0/2]'`
  • If I run `python -m benchmark.kernel.pipeline --task explain --model_name GCN_3l --dataset_name clintox --target_idx 0 --explainer GNNExplainer --sparsity 0.5 --debug --vis --nolabel`, also on the console, I get the error: `TypeError: draw_networkx_nodes() got an unexpected keyword argument 'num_nodes'`
  • And when trying to run the script /benchmark/pipeline.py in Spyder, I get the error `AssertionError: Explaining on multi tasks is meaningless.`, which comes from benchmark/data/dataset.py

Any hint on how to make any of these work?

I'd like to explore the examples before I try my own data, so I might follow up on the explanation bit and edge masks once I manage to get the code running 😃. Thanks again!

@Oceanusity
Collaborator

Hi, I have the following suggestions for the issues.

@paulamartingonzalez
Author

paulamartingonzalez commented May 21, 2021

Many thanks again! Following that issue, the second point works well.

Now, when I try to run `python -m benchmark.kernel.pipeline --task explain --model_name GCN_3l --dataset_name ba_shape --target_idx 2 --explainer GNNExplainer --sparsity 0.5`, I get

```
File "/home/martin09/DIG/dig/xgraph/GNNExplainer/benchmark/models/model_manager.py", line 45, in config_model
    print(f'#E#Checkpoint not found at {os.path.abspath(args.test_ckpt)}')
This overload of nonzero is deprecated:
	nonzero()
Consider using one of the following signatures instead:
	nonzero(*, bool as_tuple) (Triggered internally at  /opt/conda/conda-bld/pytorch_1595629395347/work/torch/csrc/utils/python_arg_parser.cpp:766.)
```

I tried reinstalling everything from scratch to check if that was the issue, but it persisted. Do you have any hint of what may be happening?

@paulamartingonzalez
Author

Small update: with `python -m benchmark.kernel.pipeline --task explain --model_name GCN_3l --dataset_name tox21 --target_idx 2 --explainer GNNExplainer --sparsity 0.3 --vis` it worked, which makes me think that perhaps a trained model for the combination above did not exist.

I think I will close the issue. I will be exploring the code over the next couple of days and I might reopen it with more questions. 😃 Thanks!

@paulamartingonzalez
Author

paulamartingonzalez commented May 24, 2021

I am exploring the code on the TOX21 dataset, and I would like to understand the outputs a bit better (both the edge masks and the feature masks). For the edge masks, is the `masks` variable in sample_explain the one I should be looking at? For the feature masks, I don't really know where to find them; would you mind pointing me to them? Is there one general feature mask for the whole cohort, or one per graph?

Ultimately, I am looking to get something like Figure 5 in the GNNExplainer paper: https://arxiv.org/pdf/1903.03894.pdf

(screenshot: Figure 5 from the GNNExplainer paper)

@Oceanusity
Collaborator

For the GNNExplainer class, you can extract the node feature mask and the edge mask from the class attributes `self.node_feat_mask` and `self.edge_mask`. Please set `mask_features=True` to explain the importance of node features.
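For example (a self-contained sketch with invented values; the attribute name follows the comment above, but in DIG it would be a torch tensor rather than a plain list): once you have `self.node_feat_mask`, you can rank feature indices by importance to see which input features the explanation relies on most.

```python
# Invented per-feature importance scores standing in for
# self.node_feat_mask (hypothetical values for illustration).
node_feat_mask = [0.12, 0.83, 0.05, 0.47]

# Rank feature indices from most to least important.
ranking = sorted(range(len(node_feat_mask)),
                 key=lambda i: node_feat_mask[i],
                 reverse=True)
print(ranking)  # [1, 3, 0, 2]
```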

For the figure, GNNExplainer provides the visualization shown in README.md. It's not the same as in the GNNExplainer paper, but I think it can help you with the figure to some extent.
