
confused with the conv layer #6

Open · tc-yue opened this issue Aug 6, 2018 · 3 comments
tc-yue commented Aug 6, 2018

Att_v = tf.contrib.layers.conv2d(G, num_outputs=opt.num_class, kernel_size=[opt.ngram], padding='SAME', activation_fn=tf.nn.relu)  # b * s * c
The implementation code above is a conv2d operation on the match matrix G, while the formulation in the paper below seems to use only a single filter of size (2r+1) to produce a further match matrix (K * L). I think they are a little different. Is that true?

$$u_l = \mathrm{ReLU}(GW + b)$$

where $W \in \mathbb{R}^{2r+1}$, $b \in \mathbb{R}^{K}$, and $u_l \in \mathbb{R}^{K}$.
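For concreteness, a minimal TF 1.x sketch of the two readings (the shapes b, s, K, r and the names W_paper / W_repo are placeholders, not from the repo). The paper's equation reads as a depthwise 1-D convolution with one shared filter per class channel, while the code's call amounts to a full conv1d:

```python
import tensorflow as tf

b, s, K, r = 2, 10, 5, 1          # batch, seq_len, num_class, half-window
ngram = 2 * r + 1

G = tf.placeholder(tf.float32, [b, s, K])       # match matrix, b * s * c

# Paper's reading: u_l = ReLU(G_{l-r:l+r} W + b) with one shared filter
# W in R^{2r+1}. Fold the K class channels into the batch so the same
# filter slides over each class sequence independently (no class mixing).
W_paper = tf.get_variable('W_paper', [ngram, 1, 1])
b_paper = tf.get_variable('b_paper', [K])
G_flat = tf.reshape(tf.transpose(G, [0, 2, 1]), [b * K, s, 1])
u = tf.nn.conv1d(G_flat, W_paper, stride=1, padding='SAME')
u_paper = tf.nn.relu(
    tf.transpose(tf.reshape(u, [b, K, s]), [0, 2, 1]) + b_paper)   # b * s * K

# Repo's call, spelled out as an explicit conv1d: the weight has shape
# (2r+1, K, K), so every output class mixes all input classes.
W_repo = tf.get_variable('W_repo', [ngram, K, K])
b_repo = tf.get_variable('b_repo', [K])
u_repo = tf.nn.relu(
    tf.nn.conv1d(G, W_repo, stride=1, padding='SAME') + b_repo)    # b * s * K
```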

@tangqianyan

Excuse me, could this code be applied directly to multi-label learning?

@ReactiveCJ

@tc-yue It is just a conv1d, so the size of W is (2r+1) * K * K.
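A quick way to check this shape claim, assuming an older TF 1.x build where tf.contrib.layers.conv2d accepts a rank-3 input and dispatches to a 1-D convolution (which the repo's call relies on):

```python
import tensorflow as tf

b, s, K, ngram = 2, 10, 5, 3      # ngram = 2r + 1
G = tf.placeholder(tf.float32, [b, s, K])

# Same call as the repo, with concrete placeholder shapes.
Att_v = tf.contrib.layers.conv2d(G, num_outputs=K, kernel_size=[ngram],
                                 padding='SAME', activation_fn=tf.nn.relu)

for v in tf.trainable_variables():
    print(v.name, v.shape)
# Expected: the kernel variable has shape (3, 5, 5), i.e. (2r+1) * K * K,
# plus a bias of shape (5,).
```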

@lanjinraol

@tc-yue Hello, I have the same confusion as you. I think the two operations are different. In the paper, the single filter of size (2r+1) only captures contextual features, while in the implementation code the K filters capture both contextual features and relationships among the categories. Have you found any reasonable explanation?
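The difference is easy to see numerically. In a small NumPy sketch (all names and shapes here are hypothetical), perturbing one class channel changes only that channel's output under the paper's shared filter, but changes every output channel under the code's full conv1d weight:

```python
import numpy as np

def paper_conv(G, w):
    # One shared filter w (length 2r+1): each class channel is convolved
    # independently -> contextual features only, no cross-class mixing.
    return np.stack([np.convolve(G[:, k], w[::-1], mode='same')
                     for k in range(G.shape[1])], axis=1)

def repo_conv(G, W):
    # Full conv1d weight W of shape (2r+1, K, K): each output class sums
    # over all input classes -> context plus cross-class relationships.
    s, K = G.shape
    out = np.zeros((s, K))
    for k in range(K):
        for i in range(K):
            out[:, k] += np.convolve(G[:, i], W[::-1, i, k], mode='same')
    return out

np.random.seed(0)
s, K, r = 8, 4, 1
G = np.random.randn(s, K)
w = np.random.randn(2 * r + 1)
W = np.random.randn(2 * r + 1, K, K)

G2 = G.copy()
G2[:, 0] += 1.0                   # perturb class channel 0 only
print((paper_conv(G2, w) - paper_conv(G, w)).any(axis=0))  # only channel 0 changes
print((repo_conv(G2, W) - repo_conv(G, W)).any(axis=0))    # all channels change
```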
