
What is the principle behind the rnn-cnn layer defined in p71_TextRCNN_model.py? The final way of ensembling left context, embedding, and right context into the output doesn't look like a Bi-LSTM layer #51

Closed
JepsonWong opened this issue May 8, 2018 · 6 comments

Comments

@JepsonWong

No description provided.

@brightmart
Owner

brightmart commented May 8, 2018 via email

@fei161

fei161 commented May 9, 2018

My understanding of the paper is that there should be an LSTM layer followed by a CNN layer to extract features. Why are there no LSTM and CNN layers in your code?
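For reference, the recurrent structure in the TextRCNN paper (Lai et al., 2015) is a plain recurrent transformation rather than an LSTM, and the convolutional part reduces to an element-wise max-pooling over positions, which may be why neither an LSTM nor a convolution appears in the code. A sketch of the paper's equations, as I read them (notation follows the paper):

c_l(w_i) = f(W^(l) c_l(w_{i-1}) + W^(sl) e(w_{i-1}))
c_r(w_i) = f(W^(r) c_r(w_{i+1}) + W^(sr) e(w_{i+1}))
x_i = [c_l(w_i); e(w_i); c_r(w_i)],  y_i^(2) = tanh(W^(2) x_i + b^(2)),  y^(3) = max_i y_i^(2)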

@JepsonWong
Author

When you compute the right context, you reverse the input sequence, but you don't reverse the output sequence. I think the output sequence should be reversed back.
By the way, did you implement an RNN cell yourself in your source code, rather than using TensorFlow's API?
Thanks. @brightmart
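A minimal NumPy sketch of why the outputs must be reversed back (this is not the repository's TensorFlow code; the names W_r and W_sr, the zero boundary vector, and the tanh activation follow the paper's equations and are assumptions here):

import numpy as np

def right_context(embeddings, W_r, W_sr, f=np.tanh):
    # embeddings: [n, embed]; W_r: [hidden, hidden]; W_sr: [hidden, embed]
    c = np.zeros(W_r.shape[0])       # boundary context c_r(w_n)
    outputs = [c]
    for e in embeddings[::-1][:-1]:  # walk e(w_n), e(w_{n-1}), ..., e(w_2)
        c = f(W_r @ c + W_sr @ e)    # c_r(w_i) = f(W_r c_r(w_{i+1}) + W_sr e(w_{i+1}))
        outputs.append(c)
    # Without this final reversal, row i would hold the context of word n-i
    # instead of word i+1 -- the misalignment described above.
    return np.stack(outputs[::-1])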

@JepsonWong
Author

I guess the author implemented an RNN cell himself in his source code.

def instantiate_weights(self):

These are the RNN weights.
@fei161
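If that guess is right, here is a minimal TensorFlow 1.x sketch of what such hand-rolled recurrence weights could look like (the variable names, shapes, and initializer are my assumptions following the paper's equations, not necessarily what p71_TextRCNN_model.py actually defines):

import tensorflow as tf  # TensorFlow 1.x, the framework this repository uses

def instantiate_weights(self):
    # Hypothetical sketch: parameters of the paper's recurrences,
    # c_l(w_i) = f(W_l c_l(w_{i-1}) + W_sl e(w_{i-1})) and its right-side mirror.
    # Shapes are chosen so a batch of vectors can be right-multiplied (h @ W).
    init = tf.random_normal_initializer(stddev=0.1)
    with tf.variable_scope("rcnn_weights"):
        self.W_l = tf.get_variable("W_l", shape=[self.hidden_size, self.hidden_size], initializer=init)
        self.W_sl = tf.get_variable("W_sl", shape=[self.embed_size, self.hidden_size], initializer=init)
        self.W_r = tf.get_variable("W_r", shape=[self.hidden_size, self.hidden_size], initializer=init)
        self.W_sr = tf.get_variable("W_sr", shape=[self.embed_size, self.hidden_size], initializer=init)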

@fei161

fei161 commented May 9, 2018

The author first computes the contexts with a fully-connected (linear) transformation, then concatenates the left context, the word embedding, and the right context, takes an element-wise max over all positions, and finally connects the result to a fully-connected output layer.
@JepsonWong
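A minimal NumPy sketch of that pipeline, assuming the contexts have already been computed (the names rcnn_output, W_out, and b_out are illustrative, not the repository's):

import numpy as np

def rcnn_output(c_left, emb, c_right, W_out, b_out):
    # c_left, c_right: [n, hidden] contexts per word; emb: [n, embed] word embeddings
    x = np.concatenate([c_left, emb, c_right], axis=1)  # [n, 2*hidden + embed] per-position features
    pooled = x.max(axis=0)                              # element-wise max over the n positions
    return pooled @ W_out + b_out                       # fully-connected output layer -> logits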

@JepsonWong
Copy link
Author

Yes, I know this. I just think that when we compute the right context, we should reverse the output sequence back.
@fei161
