CapsNet #6
If you look in the model file you'll see one convolutional layer, a primary capsule layer, and a digit caps layer. This pattern is also what the dynamic routing paper used. The EM paper had many more repeating layers.
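The three-layer pattern described above (conv → primary caps → digit caps) can be sketched in shapes alone. This is a hypothetical numpy illustration using the 28×28 MNIST dimensions from the dynamic routing paper (Sabour et al. 2017), not this repo's actual code; the random values stand in for learned activations and weights:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Squash nonlinearity from the paper: scales each vector's norm into [0, 1)."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

# Layer 1: ordinary conv, 28x28 input -> 20x20x256 (9x9 kernel, stride 1) -- omitted here.
# Layer 2: primary caps, 20x20x256 -> 6x6 grid of 32 capsule types, each an 8-dim vector.
primary = squash(np.random.randn(6 * 6 * 32, 8))   # 1152 capsules of dim 8

# Layer 3: digit caps, 10 capsules of dim 16 (one per class), via learned
# transformation matrices W mapping each 8-dim capsule to a 16-dim prediction.
W = np.random.randn(10, 1152, 8, 16) * 0.01
u_hat = np.einsum('id,jide->jie', primary, W)      # predictions: (10, 1152, 16)
print(u_hat.shape)  # (10, 1152, 16)
```

Routing then decides how the 1152 predictions are combined into the 10 output capsules.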
On Sun, Jul 29, 2018, 8:21 AM ussaema wrote:
> Are you using CapsNet with dynamic routing or EM routing? The title of your paper does not match your content, so I was confused!
> And did you modify anything in the structure of the baseline CapsNet or the generator with respect to WGAN?
I already looked at the code in the models directory. It is CapsuleNet using dynamic routing with margin loss.
Totally. I was only remarking on the layers because it's easy to see quickly. Every implementation I've seen has used the same number of layers.
On Sun, Jul 29, 2018, 9:15 AM ussaema wrote:
> I already looked at the code in the models directory. It is CapsuleNet using dynamic routing with margin loss.
> The EM paper indeed has more repeating layers, but the routing (clustering of the pose predictions) is not based on projections (cosine similarities). It is, in fact, based on a soft version of k-means clustering, which is EM clustering. So I don't think it is just about the number of capsule layers used, but about the efficiency of the routing and the loss function.
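For contrast with the EM-style soft clustering described above, here is one iteration of the dot-product-agreement scheme from the dynamic routing paper, as a minimal hypothetical numpy sketch (not this repo's code; shapes follow the paper's 1152 input capsules and 10 output capsules):

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def squash(s, axis=-1, eps=1e-8):
    n2 = np.sum(s ** 2, axis=axis, keepdims=True)
    return (n2 / (1.0 + n2)) * s / np.sqrt(n2 + eps)

rng = np.random.default_rng(0)
u_hat = rng.normal(size=(10, 1152, 16))   # predictions from the lower capsules
b = np.zeros((10, 1152))                  # routing logits, start uniform

for _ in range(3):
    c = softmax(b, axis=0)                # coupling coefficients per input capsule
    v = squash((c[..., None] * u_hat).sum(axis=1))   # (10, 16) output capsules
    # agreement update: a plain dot product between prediction and output --
    # this is the step EM routing replaces with E/M steps of a Gaussian mixture
    b = b + np.einsum('jie,je->ji', u_hat, v)

print(v.shape)  # (10, 16)
```

EM routing would instead treat the predictions as data points and the output capsules as mixture components, computing soft responsibilities from Gaussian likelihoods rather than dot-product agreement, which matches the "soft k-means" characterization in the comment above.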
I have yet to dig into EM routing! Still honestly haven't had the time to sit down and understand capsule nets more generally.
I understand :) You are using the same loss (margin loss) for the generator and the discriminator, right?
In my project I was not using a GAN, just the capsule network by itself. Yes, I was using margin loss.
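The margin loss being discussed is the one from the dynamic routing paper. A minimal numpy sketch with the paper's constants (m+ = 0.9, m− = 0.1, λ = 0.5), written here as an illustration rather than taken from this repo:

```python
import numpy as np

def margin_loss(v_norms, targets, m_pos=0.9, m_neg=0.1, lam=0.5):
    """v_norms: (batch, classes) digit-capsule lengths; targets: one-hot (batch, classes).

    Penalizes the target class when its capsule length falls below m_pos,
    and non-target classes when their lengths exceed m_neg (down-weighted by lam).
    """
    pos = targets * np.maximum(0.0, m_pos - v_norms) ** 2
    neg = lam * (1 - targets) * np.maximum(0.0, v_norms - m_neg) ** 2
    return (pos + neg).sum(axis=1).mean()

# A confident, correct prediction incurs zero loss:
v = np.array([[0.95, 0.05]])
t = np.array([[1.0, 0.0]])
print(margin_loss(v, t))  # 0.0
```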