Test on smallNORB with parameters specified in the paper has very bad results #24
Comments
Facing a similar issue. In the initial runs, the result was quite bad on smallNORB. After multiple runs, I was able to get a decent accuracy. I saw a similarly huge variance in validation accuracy across training runs on MNIST. @www0wwwjs1 do you know what might cause this much variance?
The test script doesn't read only the latest training parameters; it iterates over all saved models starting from the first iteration, so the earliest checkpoints will show terrible accuracy. Try evaluating only the latest checkpoint; the output should be stable.
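A minimal sketch of "evaluate only the latest checkpoint": assuming TensorFlow-style checkpoint names like `model.ckpt-<step>` (the helper name and file list here are hypothetical; in TensorFlow itself, `tf.train.latest_checkpoint` does this via the `checkpoint` index file), you can pick the checkpoint with the highest global step instead of looping over all of them:

```python
import re

def latest_checkpoint(checkpoint_names):
    """Return the checkpoint name with the highest global step.

    Assumes TensorFlow's default naming, e.g. 'model.ckpt-3000',
    where the trailing number is the training step.
    """
    def step(name):
        match = re.search(r'-(\d+)$', name)
        return int(match.group(1)) if match else -1
    return max(checkpoint_names, key=step)

# Hypothetical list of saved checkpoints:
ckpts = ['model.ckpt-100', 'model.ckpt-3000', 'model.ckpt-2000']
print(latest_checkpoint(ckpts))  # -> model.ckpt-3000
```

Restoring only this checkpoint before running the test loop should avoid the bad accuracy numbers coming from the early, untrained models.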
Has anyone here had results similar to mine on smallNORB? Images of the output are in issue #30; that is just the training loop.
I'm not sure the dataset is loaded correctly: the smallNORB dataset contains 24300 stereo image pairs, but the model loads 24300*2 images with depth 1. I think this is wrong, and possibly it leads to the bad performance...
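If the loader really flattens the stereo pairs, one hedged way to re-pair them is to reshape the left/right frames back into a depth-2 input. This is only a sketch: it assumes the loader stores the frames interleaved as [L0, R0, L1, R1, ...], and the array sizes here are toy values (smallNORB itself has 24300 pairs of 96x96 images):

```python
import numpy as np

N, H, W = 6, 96, 96  # toy N; smallNORB has N = 24300 pairs at 96x96

# Hypothetical loader output: N stereo pairs flattened into 2*N
# single-channel images, interleaved [L0, R0, L1, R1, ...].
flat = np.arange(2 * N * H * W, dtype=np.float32).reshape(2 * N, H, W, 1)

# Re-pair consecutive left/right frames into one 2-channel image.
pairs = flat.reshape(N, 2, H, W, 1)            # (N, 2, H, W, 1)
stereo = np.transpose(pairs, (0, 2, 3, 1, 4))  # (N, H, W, 2, 1)
stereo = stereo.reshape(N, H, W, 2)            # depth-2 network input
print(stereo.shape)  # -> (6, 96, 96, 2)
```

With the real data this would feed the network 24300 examples of depth 2 rather than 48600 unrelated examples of depth 1, which matches how the paper describes the smallNORB input.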
Thanks for posting the code!
I tried your code on the smallNORB dataset with the parameters specified in the paper: A=64, B=8, C=D=16, 3 routing iterations, batch_size = 64 (I chose that number myself). But the result is very bad (it does not even converge), whereas the paper reports 97.8% accuracy. I am wondering why it is so sensitive to the number of capsules in A and to the number of routing iterations.
Could the author also post more test results with different numbers of capsules, routing iterations, learning rates, batch sizes, etc.?
Thank you very much!