What is the principle behind the rnn-cnn layer defined in p71_TextRCNN_model.py? The final way of ensembling left, embedding, and right into the output doesn't look like a Bi-LSTM layer #51
Comments
This is just taking context information (from the left and the right) into consideration when encoding a position, to get a richer representation.
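The idea above can be sketched with the RCNN recurrences from Lai et al. (2015), which this model follows: each word is represented by its left context, its embedding, and its right context. All names and dimensions below are illustrative, not taken from the repository's code.

```python
import numpy as np

# Hypothetical sketch of the RCNN context recurrence: each position i is
# represented by [left_context; embedding; right_context].
np.random.seed(0)
seq_len, embed_dim, ctx_dim = 4, 3, 2
E = np.random.randn(seq_len, embed_dim)          # word embeddings e(w_1..w_n)
W_l  = np.random.randn(ctx_dim, ctx_dim) * 0.1   # recurrent weights, left
W_sl = np.random.randn(ctx_dim, embed_dim) * 0.1 # input weights, left
W_r  = np.random.randn(ctx_dim, ctx_dim) * 0.1   # recurrent weights, right
W_sr = np.random.randn(ctx_dim, embed_dim) * 0.1 # input weights, right

# Left context: c_l(w_i) = tanh(W_l @ c_l(w_{i-1}) + W_sl @ e(w_{i-1}))
c_l = np.zeros((seq_len, ctx_dim))
for i in range(1, seq_len):
    c_l[i] = np.tanh(W_l @ c_l[i - 1] + W_sl @ E[i - 1])

# Right context: c_r(w_i) = tanh(W_r @ c_r(w_{i+1}) + W_sr @ e(w_{i+1}))
c_r = np.zeros((seq_len, ctx_dim))
for i in range(seq_len - 2, -1, -1):
    c_r[i] = np.tanh(W_r @ c_r[i + 1] + W_sr @ E[i + 1])

# Final representation per word: x_i = [c_l(w_i); e(w_i); c_r(w_i)]
X = np.concatenate([c_l, E, c_r], axis=1)
print(X.shape)  # (4, 7): ctx_dim + embed_dim + ctx_dim per position
```

Unlike a Bi-LSTM, the two scans here use simple tanh recurrences with shared projection weights, which is why the layer does not look like a standard Bi-LSTM.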
The paper seems to describe two LSTM layers followed by CNN-layer features. Why are there no LSTM and CNN in your code?
When you compute the right context, you reverse the input sequences, but you don't reverse the output sequences. I think the output sequences should be reversed as well.
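The alignment issue raised here can be illustrated with a toy scan (the function below is a hypothetical stand-in for an RNN, not the repository's code): if the right context is obtained by feeding the reversed sequence through a left-to-right scan, the outputs come back in reversed order and must be flipped again so that position i aligns with word i.

```python
import numpy as np

def scan_left_to_right(seq):
    # Cumulative "context" of everything seen so far (stand-in for an RNN).
    out, acc = [], 0.0
    for x in seq:
        acc = 0.5 * acc + x
        out.append(acc)
    return np.array(out)

seq = np.array([1.0, 2.0, 3.0, 4.0])

# Right context done correctly: reverse the input AND reverse the output.
direct = scan_left_to_right(seq[::-1])[::-1]

# Forgetting the second reversal leaves the outputs misaligned with positions.
wrong = scan_left_to_right(seq[::-1])

print(np.allclose(direct, wrong[::-1]))  # True: they differ only by order
```

So without the second reversal, the right context of the first word is paired with the last position, which is exactly the bug this comment points out.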
I guess the author implemented an RNN cell himself in the source code.
These are the RNN weights. @fei161
The author first applies a fully connected layer to the context, then concatenates the context vectors and takes an element-wise max over them, and finally connects the result to a fully connected output layer.
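A minimal sketch of that pipeline, with all names and sizes hypothetical: a fully connected projection of the concatenated [left_context; embedding; right_context] vectors, element-wise max pooling over the sequence, then a fully connected output layer.

```python
import numpy as np

np.random.seed(1)
seq_len, feat_dim, hidden, n_classes = 4, 7, 5, 3
X = np.random.randn(seq_len, feat_dim)          # concat of [c_l; e; c_r] per word

W1, b1 = np.random.randn(feat_dim, hidden) * 0.1, np.zeros(hidden)
H = np.tanh(X @ W1 + b1)                        # full connection per position
pooled = H.max(axis=0)                          # element-wise max over the sequence
W2, b2 = np.random.randn(hidden, n_classes) * 0.1, np.zeros(n_classes)
logits = pooled @ W2 + b2                       # final fully connected layer
print(logits.shape)  # (3,)
```

The max pooling plays the role of the "CNN" part of TextRCNN: it picks, per feature, the strongest activation across all positions.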
Yes, I know this. I just think that when we compute the right context, we should also reverse the output sequences.