How to easily extract attention weights? #2
Hello Tian Shi,

Thanks for sharing the project!

Is it easy to visualize attention like this?

![Screenshot at Mar 24 17-03-43](https://user-images.githubusercontent.com/1150493/54885827-ce7d8f80-4e56-11e9-82b3-98955cb031b6.png)

BTW, I pasted the following content into the demo website. The summary/highlights are an exact copy of the source text. Is this because the text is too short? And why can't I input a single sentence for abstractive summarization?

Comments

It is easy to output the attention weights using LeafNATS. If you are just using the pretrained models, the headline2_summary2_app module can output the generated summaries/headlines along with the attention weights. I also observed that problem. Actually, it is a known issue with the Pointer-Generator network (90% of a summary is copied from the source document); you can refer to the paper.
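Since the question asks about visualizing attention like the screenshot, here is a minimal sketch of rendering an attention matrix as a heatmap with matplotlib. The names `attn_weights`, `src_tokens`, and `out_tokens` are placeholders, not LeafNATS outputs; substitute whatever the app module actually returns.

```python
# Sketch only: plot a (output_len, input_len) attention matrix as a heatmap.
import numpy as np
import matplotlib.pyplot as plt

# attn_weights: one row of attention scores over the source tokens for
# each generated token, e.g. collected at every decoding step.
attn_weights = np.random.rand(6, 12)          # dummy data for the sketch
src_tokens = [f"src{i}" for i in range(12)]   # placeholder source tokens
out_tokens = [f"out{j}" for j in range(6)]    # placeholder generated tokens

fig, ax = plt.subplots()
im = ax.imshow(attn_weights, aspect="auto", cmap="viridis")
ax.set_xticks(range(len(src_tokens)))
ax.set_xticklabels(src_tokens, rotation=90)
ax.set_yticks(range(len(out_tokens)))
ax.set_yticklabels(out_tokens)
fig.colorbar(im, ax=ax, label="attention weight")
plt.tight_layout()
plt.show()
```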
Thanks for the quick reply!
You can try out different network structures with the following option: `parser.add_argument('--rnn_network', default='lstm', help='gru | lstm')`. I have just implemented the application module in pointer-generator-network. You can output attention weights there as well, just like headline2_summary2_app.
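For context, below is a minimal sketch of how such an `--rnn_network` flag typically switches the recurrent cell in PyTorch. The `EncoderRNN` class and its parameters are illustrative assumptions, not LeafNATS's actual implementation.

```python
# Sketch only: select nn.GRU or nn.LSTM from an --rnn_network flag.
import argparse
import torch
import torch.nn as nn

parser = argparse.ArgumentParser()
parser.add_argument('--rnn_network', default='lstm', help='gru | lstm')
args = parser.parse_args([])  # parse defaults here; use sys.argv in a script

class EncoderRNN(nn.Module):
    """Illustrative bidirectional encoder whose cell type is configurable."""
    def __init__(self, input_size, hidden_size, rnn_network='lstm'):
        super().__init__()
        rnn_cls = nn.LSTM if rnn_network == 'lstm' else nn.GRU
        self.rnn = rnn_cls(input_size, hidden_size,
                           batch_first=True, bidirectional=True)

    def forward(self, x):
        # LSTM returns (h, c) as hidden state; GRU returns h only,
        # so keep the second return value generic.
        output, hidden = self.rnn(x)
        return output, hidden

encoder = EncoderRNN(input_size=128, hidden_size=256,
                     rnn_network=args.rnn_network)
out, _ = encoder(torch.randn(2, 10, 128))
print(out.shape)  # (2, 10, 512) with a bidirectional encoder
```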