Thank you for your amazing work. May I ask if there is anything important that requires modification in the code currently uploaded to GitHub? I was trying to reproduce the result for Spikformer-8-512 on ImageNet, but the result I got was quite different from the one shown in the paper.
At epoch 25, the top-1 accuracy shown in the paper is clearly over 50%, while my reproduced result is barely over 40%. I used exactly the same code as in this GitHub repo, except that I used a batch size of 24 due to my GPU memory limitation.
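One thing worth checking: with a much smaller batch size than the reference config, the learning rate usually needs to be scaled down proportionally (the linear scaling rule). A minimal sketch below, with placeholder numbers — the actual base learning rate and batch size should be taken from the repo's config, these are not the repo's values:

```python
# Linear learning-rate scaling rule: lr scales in proportion to the
# total batch size relative to the reference configuration.
# NOTE: base_lr=1e-3 and base_batch_size=256 are illustrative
# placeholders, NOT values from the Spikformer repo.
def scale_lr(base_lr: float, base_batch_size: int, new_batch_size: int) -> float:
    """Return the learning rate adjusted for a different batch size."""
    return base_lr * new_batch_size / base_batch_size

# If the reference run used lr=1e-3 at batch 256, a batch of 24 would use:
adjusted_lr = scale_lr(1e-3, 256, 24)
print(adjusted_lr)  # 9.375e-05
```

Without this adjustment, a run at batch 24 with the original learning rate can easily diverge from the paper's convergence curve even when the code is identical.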
Please enlighten me. Thank you so much!
Please provide the curve for the full 300-epoch training run. The training process will differ under different hyperparameters, so we cannot judge from a partial convergence curve alone.
The hyperparameters are unchanged from the ones shown in this GitHub repo.
Additionally, may I ask how long it took you to train this model on ImageNet?