
ModuleNotFoundError: No module named 'models.MedianGCN' #2

Open · yuChen-XD opened this issue Nov 21, 2023 · 4 comments

@yuChen-XD

Hi, I ran into the error `ModuleNotFoundError: No module named 'models.MedianGCN'` when I tried to run the script. It seems the file MedianGCN is not included in the 'models' directory. I hope you can look into it.
Thank you in advance for your help.

@ventr1c
Owner

ventr1c commented Dec 1, 2023

Hi, thanks for the report! This baseline is adapted from https://github.com/EdisonLeeeee/MedianGCN; you can refer to that code to run it. Alternatively, you can simply remove this import to run our code. Thanks again for your attention; we will clean up the code and release this baseline soon. If you have any other questions, please feel free to ask.
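
For anyone hitting the same error before the baseline is released, a minimal workaround sketch is to guard the failing import (the import path `models.MedianGCN` and the `MedianGCN` class name are assumptions about how the script references the baseline, not confirmed by the repository):

```python
# Hypothetical guard around the missing baseline import, so the rest of
# the pipeline still runs when MedianGCN.py is absent from models/.
try:
    from models.MedianGCN import MedianGCN  # assumed import path
except ModuleNotFoundError:
    MedianGCN = None  # baseline not bundled; disabled below

def get_median_gcn(*args, **kwargs):
    """Return a MedianGCN instance, or fail with a pointer to the upstream repo."""
    if MedianGCN is None:
        raise RuntimeError(
            "MedianGCN is not included in this repository; obtain it from "
            "https://github.com/EdisonLeeeee/MedianGCN or skip this baseline."
        )
    return MedianGCN(*args, **kwargs)
```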

@yuChen-XD
Author

Hi! Thank you very much for your reply. I'll refer to the repository you provided. By the way, I'd appreciate it if you could provide the scripts for running the injection evasion methods such as TDGIA and AGIA, since their settings involve many parameters. Thank you again for open-sourcing the code; it is clear and suitable for beginners to learn from.

@yuChen-XD
Author

yuChen-XD commented Dec 27, 2023

Hi, I have a question about the difference in the defense setting when testing on clean samples (10% of the total samples) versus samples with implanted backdoors (another 10%). For example, I noticed that in your code you apply 'prune' when testing the samples with implanted backdoors, but not when testing the clean samples. Logically, users are unaware of whether the test data contains backdoors. Therefore, when calculating clean accuracy (clean acc), the setting should be consistent with testing on data with implanted backdoors: if a defense strategy is used in one scenario, it should be applied in both.
Is my understanding correct? Looking forward to your response.

@ventr1c
Owner

ventr1c commented Feb 13, 2024

Hi,

Thanks for your question. In our paper, the clean accuracy we report is computed on clean graphs without any modification; this setting measures the clean accuracy of each compared method. You are free to apply pruning when measuring test accuracy if needed.
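
For readers who want the consistent setting described above, here is a minimal sketch that applies the same defense to both evaluation splits. The `prune_edges` helper and the data layout are assumptions for illustration, not the repository's actual API:

```python
import torch

def prune_edges(edge_index, features, threshold=0.1):
    """Assumed defense: drop edges whose endpoints have dissimilar features
    (cosine similarity below `threshold`). Stand-in for the repo's 'prune'."""
    src, dst = edge_index
    sim = torch.cosine_similarity(features[src], features[dst], dim=1)
    keep = sim >= threshold
    return edge_index[:, keep]

def evaluate(model, features, edge_index, labels, mask, defend=True):
    """Accuracy on the nodes selected by `mask`, optionally with pruning.
    Passing the same `defend` flag for the clean and backdoored splits keeps
    the defense setting consistent across the two measurements."""
    if defend:
        edge_index = prune_edges(edge_index, features)
    model.eval()
    with torch.no_grad():
        pred = model(features, edge_index).argmax(dim=1)
        return (pred[mask] == labels[mask]).float().mean().item()

# Consistent protocol: the defense is either on for both splits or off for both.
# clean_acc = evaluate(model, x, edge_index, y, clean_mask, defend=True)
# bkd_acc   = evaluate(model, x, bkd_edge_index, y, bkd_mask, defend=True)
```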
