Question about the reported accuracy on the ImageNet dataset #5

Open
OuYangLiang0509 opened this issue Apr 8, 2024 · 1 comment

@OuYangLiang0509

Hello author,
In your paper, ResNet-104 with T=1 reaches an impressive 75.92% accuracy on ImageNet. However, I can only reach 70.56% using hyperparameters consistent with your supplementary materials. Furthermore, when I examine the events file you provided in TensorBoard, the test accuracy is 74.14% while the training accuracy is only 64.25%, i.e. the test accuracy is roughly 10 percentage points higher than the training accuracy, which seems unusual. I look forward to your response.
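(For reference, a minimal sketch of reading the scalar curves back out of a TensorBoard events file programmatically; the directory path and the tag names here are assumptions, not the repository's actual values:)

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# Point this at the directory containing the provided events file (path is hypothetical).
ea = EventAccumulator("runs/imagenet_resnet104")
ea.Reload()

# List the scalar tags actually logged, then dump one accuracy curve.
print(ea.Tags()["scalars"])
for event in ea.Scalars("test_acc"):  # tag name is an assumption
    print(event.step, event.value)
```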

@oteomamo

Are you running the tests on the vanilla or the attention-based SNN? To enable attention, specify the attention type in each dataset's Config.py via the self.attention hyperparameter. You can set it to CA, TA, SA, CSA, TCA, TSA, TCSA, or no. With that set, you should get results within the margin of error. One 'problem' I ran into with the paper, however, was the clip hyperparameter: I ran all my tests with clip set to 1 and got the same results as the ones reported in the paper. Are you running your code with clip set to 1 as well?
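(Concretely, a minimal sketch of what the relevant Config.py fields might look like; only self.attention, its allowed values, and clip = 1 come from this thread, while the class layout and the clip attribute name are assumptions:)

```python
class Config:
    def __init__(self):
        # Attention variant for the SNN. Per the comment above, valid values are:
        # "CA", "TA", "SA", "CSA", "TCA", "TSA", "TCSA", or "no" (vanilla, no attention).
        self.attention = "TCSA"

        # Gradient-clipping value; the attribute name is an assumption, but the
        # commenter reports reproducing the paper's numbers with clip set to 1.
        self.clip = 1
```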
