
Abstractive Text Summarization

Authors: Hanxiang (Henry) Pan, Xinyu Ma

We approached this problem with a baseline model: a gated recurrent unit (GRU) encoder-decoder with an attention mechanism.
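The sketch below illustrates the baseline idea: a GRU encoder produces hidden states for the source article, and at each decoding step an additive attention module weighs those states to form a context vector. Module names, dimensions, and the PyTorch framework are assumptions for illustration; the actual implementation lives in the notebook and may differ.

```python
# Illustrative sketch of a GRU encoder-decoder with attention (not the notebook's exact code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hid_dim, batch_first=True)

    def forward(self, src):                       # src: (batch, src_len)
        outputs, hidden = self.gru(self.embed(src))
        return outputs, hidden                    # outputs: (batch, src_len, hid_dim)

class AttnDecoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.attn = nn.Linear(hid_dim * 2, 1)     # scores a (decoder state, encoder state) pair
        self.gru = nn.GRU(emb_dim + hid_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, prev_token, hidden, enc_outputs):
        # prev_token: (batch, 1), hidden: (1, batch, hid), enc_outputs: (batch, src_len, hid)
        query = hidden[-1].unsqueeze(1).expand(-1, enc_outputs.size(1), -1)
        scores = self.attn(torch.cat([query, enc_outputs], dim=-1)).squeeze(-1)
        weights = F.softmax(scores, dim=-1)                      # attention over source positions
        context = torch.bmm(weights.unsqueeze(1), enc_outputs)   # (batch, 1, hid)
        rnn_in = torch.cat([self.embed(prev_token), context], dim=-1)
        output, hidden = self.gru(rnn_in, hidden)
        return self.out(output.squeeze(1)), hidden, weights      # next-token logits
```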

We analyzed its performance and pitfalls, then implemented the Transformer architecture with multi-head self-attention.
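For context, the following is a minimal sketch of the multi-head self-attention block the Transformer is built around, following "Attention Is All You Need". The sizes and names here are illustrative assumptions, not taken from the notebook.

```python
# Illustrative multi-head self-attention (scaled dot-product attention across several heads).
import torch
import torch.nn as nn
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    # q, k, v: (batch, heads, seq_len, d_head)
    scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

class MultiHeadSelfAttention(nn.Module):
    def __init__(self, d_model=512, num_heads=8):
        super().__init__()
        assert d_model % num_heads == 0
        self.num_heads, self.d_head = num_heads, d_model // num_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)    # joint projection to queries, keys, values
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x, mask=None):                  # x: (batch, seq_len, d_model)
        b, t, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # split the model dimension into independent heads
        split = lambda z: z.view(b, t, self.num_heads, self.d_head).transpose(1, 2)
        attended = scaled_dot_product_attention(split(q), split(k), split(v), mask)
        return self.out(attended.transpose(1, 2).reshape(b, t, -1))
```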

We achieved a ROUGE-1 score of 0.35 on the validation set and 0.37 on the training set.
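ROUGE-1 measures unigram overlap between a generated summary and its reference. Below is a minimal sketch of the F1 variant for reference only; the notebooks may rely on a dedicated ROUGE library rather than a hand-rolled helper like this.

```python
# Illustrative ROUGE-1 F1: unigram overlap between candidate and reference summaries.
from collections import Counter

def rouge_1_f1(candidate: str, reference: str) -> float:
    cand, ref = Counter(candidate.lower().split()), Counter(reference.lower().split())
    overlap = sum((cand & ref).values())          # number of matched unigrams
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge_1_f1("the cat sat on the mat", "the cat lay on the mat"))  # ~0.83
```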

Code

The baseline model can be found in Project_Baseline.ipynb.

The improved model can be found in Project_Improved.ipynb.

Report

All the analyses and discussions can be found in Final_Project.pdf.

License

MIT license.
