Relu activation in PrimaryCap? #25
In fact, I have the same doubt. I've read the paper again and again, but I can't find any statement about whether PrimaryCaps has a ReLU activation or not, so I just let it go. Maybe I should add a comment in the code noting that. I'll run an experiment to find out which works better and use that as the default. Thank you for your comment.
Fix naturomics#25, rm relu activation fun from primarycap
@oargueta3 I've run an experiment on this. The results show that using ReLU gives a higher test accuracy (batch size 48, 50 epochs, ~99.4) than no ReLU (batch size 48, 30 epochs; the best result is 99.1 at step 8600, but most of the time it stays around 99.0).
@naturomics, thanks for your work, it is great and really quick! It seems there should be no ReLU between the capsule convolution and the squashing: the third paragraph of Sec. 4 of the original paper (both v1 and v2) describes PrimaryCaps as a convolutional capsule layer with no ReLU mentioned (only Conv1 is said to use a ReLU activation).
So I think we can confirm that no ReLU is placed there. Any suggestions?
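For reference, here is a minimal sketch of the squashing nonlinearity the thread keeps referring to (Eq. 1 in the paper). The function name `squash`, the axis convention, and the `epsilon` term are my own choices for illustration, not necessarily what `capsLayer.py` uses; it assumes TF 1.x (≥ 1.5 for `keepdims`):

```python
# Minimal sketch of the squashing nonlinearity (Eq. 1 of the paper),
# assuming capsule vectors lie along `axis`; names and epsilon are mine.
import tensorflow as tf

def squash(s, axis=-2, epsilon=1e-9):
    # ||s||^2, keeping dims so the result broadcasts back over s
    squared_norm = tf.reduce_sum(tf.square(s), axis=axis, keepdims=True)
    # scale factor ||s||^2 / (1 + ||s||^2) applied to the unit vector s/||s||
    scale = squared_norm / (1.0 + squared_norm)
    return scale * s / tf.sqrt(squared_norm + epsilon)
```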
tf.contrib.layers.conv2d applies a ReLU activation by default, but in the paper the PrimaryCaps convolution does not include a ReLU before the neurons are grouped into capsules and squashed, or did I miss something from the paper? The relevant line is referenced below (see also the sketch after it):
CapsNet-Tensorflow/capsLayer.py, line 59 at commit 894c79c
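For concreteness, here is a sketch (not the repo's exact code) of the two variants under discussion, assuming TF 1.x and the paper's PrimaryCaps hyperparameters (9×9 kernel, stride 2, 32 channels of 8D capsules); the placeholder shape mimics Conv1's 20×20×256 output and is an assumption:

```python
# Sketch of the two PrimaryCaps variants discussed in this thread.
import tensorflow as tf

# Assumed stand-in for Conv1's output (20x20x256 in the paper).
conv1_out = tf.placeholder(tf.float32, shape=[None, 20, 20, 256])

# Variant 1: tf.contrib.layers.conv2d defaults to activation_fn=tf.nn.relu,
# so a ReLU is applied before the neurons are grouped into capsules.
primary_with_relu = tf.contrib.layers.conv2d(
    conv1_out, num_outputs=32 * 8, kernel_size=9, stride=2, padding='VALID')

# Variant 2: pass activation_fn=None so the capsules go straight to squashing.
primary_no_relu = tf.contrib.layers.conv2d(
    conv1_out, num_outputs=32 * 8, kernel_size=9, stride=2, padding='VALID',
    activation_fn=None)
```

Training both variants would reproduce the comparison @naturomics reports above.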