Some questions about the Adaptive-Attention (AA) module #57
Comments
Question 1: yes.
Thanks for the reply.
No. It is reflected by the degree of correlation between the visual information, the non-visual information, and h_t, followed by a softmax that normalizes the contributions of the two kinds of information. As the training loss decreases, the contribution of visual information to predicting visual words grows, and the contribution of non-visual information to predicting non-visual words grows.
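The gating described above can be sketched as follows. This is a generic illustration, not the repository's actual code: the dot-product scoring rule, the function names, and the dimensions are all assumptions; only the "score each source against h_t, then softmax-normalize the two contributions" idea comes from the comment.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D array
    e = np.exp(x - np.max(x))
    return e / e.sum()

def adaptive_fuse(h_t, v_t, k_t):
    """Fuse visual (v_t) and non-visual (k_t) information for state h_t.

    Each source is scored by its correlation (here: a dot product, an
    assumption) with h_t; a softmax over the two scores normalizes the
    contributions so they sum to 1.
    """
    scores = np.array([h_t @ v_t, h_t @ k_t])
    beta = softmax(scores)  # beta[0]: visual weight, beta[1]: non-visual weight
    fused = beta[0] * v_t + beta[1] * k_t
    return fused, beta

rng = np.random.default_rng(0)
d = 8
h = rng.standard_normal(d)
v = rng.standard_normal(d)
k = rng.standard_normal(d)
fused, beta = adaptive_fuse(h, v, k)
```

In training, the loss gradient flows through `beta`, so the weight on the visual branch can grow for visual words and the weight on the non-visual branch for non-visual words, as the answer above describes.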
Hello 👋, and thanks for sharing the code. I'd like to confirm a few points about the Adaptive-Attention (AA) module.
2. Is the purpose of applying attention over h_t, v_t, and k_t to make the more relevant visual/audio signals contribute more to h_t?
3. I'm not sure whether I understand this correctly: during training, the outputs of different time steps are concatenated because they are computed in parallel, whereas at test time the word prediction is only the result of the current time step.
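The training-versus-inference difference asked about in question 3 can be sketched like this. The decoder step, vocabulary size, and weight shapes are hypothetical placeholders; the point is only the control flow: teacher-forced training stacks per-step logits along the time axis, while greedy decoding feeds each prediction into the next step.

```python
import numpy as np

def step(h, x, W):
    # one hypothetical decoder step: new hidden state plus logits over a tiny vocab
    h_new = np.tanh(W @ np.concatenate([h, x]))
    logits = h_new[:4]  # pretend the first 4 dims are vocabulary logits
    return h_new, logits

def train_forward(tokens, emb, W):
    # teacher forcing: feed the ground-truth tokens, collect logits at every
    # time step, then stack them into one (T, vocab) array for the loss
    h = np.zeros(emb.shape[1])
    outs = []
    for t in tokens:
        h, logits = step(h, emb[t], W)
        outs.append(logits)
    return np.stack(outs)

def greedy_decode(T, emb, W, start=0):
    # test time: each step predicts only the current word, which is fed back
    # as the input of the next step
    h = np.zeros(emb.shape[1])
    tok, preds = start, []
    for _ in range(T):
        h, logits = step(h, emb[tok], W)
        tok = int(np.argmax(logits))
        preds.append(tok)
    return preds

rng = np.random.default_rng(1)
emb = rng.standard_normal((4, 8))   # (vocab, dim), hypothetical sizes
W = rng.standard_normal((8, 16))
train_out = train_forward([1, 2, 3], emb, W)
test_out = greedy_decode(3, emb, W)
```

Note that "parallel" in the question usually refers to computing all teacher-forced steps at once (e.g. in a Transformer); this sketch runs them in a loop but still concatenates the per-step outputs, which is the part the question is about.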