Applied Deep Learning (2020 Spring) @ NTU
-
architecture:
- BiLSTM -> summary extraction
- seq2seq (encoder, decoder) -> summary abstraction
- seq2seq + attention (encoder, decoder, attention) -> summary abstraction
-
data: please download here
-
Data Processing
1. Load pretrained embeddings from GloVe
2. Tokenization with the Keras tokenizer
   - fit on texts
   - texts to sequences
   - pad sequences
-
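The steps above can be sketched in plain Python/NumPy. This is a minimal stand-in for what `keras.preprocessing`'s `Tokenizer` and `pad_sequences` do (fit on texts, texts to sequences, pre-pad/pre-truncate), plus building an embedding matrix from GloVe vectors; `toy_glove` and all dimensions here are illustrative placeholders, not the real GloVe file.

```python
from collections import Counter

import numpy as np

def fit_on_texts(texts):
    # Most frequent word gets index 1; index 0 is reserved for padding,
    # as in the Keras Tokenizer.
    counts = Counter(w for t in texts for w in t.lower().split())
    return {w: i + 1 for i, (w, _) in enumerate(counts.most_common())}

def texts_to_sequences(texts, word_index):
    return [[word_index[w] for w in t.lower().split() if w in word_index]
            for t in texts]

def pad_sequences(seqs, maxlen):
    # Pre-padding with zeros, pre-truncation (the Keras defaults).
    return [([0] * (maxlen - len(s)) + s)[-maxlen:] for s in seqs]

def build_embedding_matrix(word_index, glove, dim):
    # Row 0 stays all-zero for the padding token; words missing from
    # GloVe also stay zero.
    mat = np.zeros((len(word_index) + 1, dim))
    for w, i in word_index.items():
        if w in glove:
            mat[i] = glove[w]
    return mat

texts = ["the cat sat", "the cat sat on the mat"]
word_index = fit_on_texts(texts)
seqs = texts_to_sequences(texts, word_index)
padded = pad_sequences(seqs, maxlen=4)

toy_glove = {"the": [0.1, 0.2], "cat": [0.3, 0.4]}  # hypothetical 2-d vectors
emb = build_embedding_matrix(word_index, toy_glove, dim=2)
```

`emb` would then initialize the (typically frozen) embedding layer of each model below.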
Training
-
BiLSTM for Summary Extraction
-
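The extractive model can be sketched as a forward pass in NumPy: a BiLSTM reads the sequence in both directions, the two hidden states are concatenated per position, and a per-position sigmoid scores each sentence for inclusion in the summary. All weights and dimensions below are random toy stand-ins, not the trained Keras model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_pass(xs, W, U, b):
    # One directional LSTM pass; the four gates are stacked in z as
    # [input, forget, output, cell-candidate].
    H = U.shape[1]
    h, c = np.zeros(H), np.zeros(H)
    hs = []
    for x in xs:
        z = W @ x + U @ h + b
        i, f, o = sigmoid(z[:H]), sigmoid(z[H:2*H]), sigmoid(z[2*H:3*H])
        g = np.tanh(z[3*H:])
        c = f * c + i * g
        h = o * np.tanh(c)
        hs.append(h)
    return np.stack(hs)

def bilstm_extract_scores(xs, fw, bw, w_out, b_out):
    # Concatenate forward states with re-reversed backward states,
    # then a per-position sigmoid gives the probability that each
    # position belongs in the extractive summary.
    h_f = lstm_pass(xs, *fw)
    h_b = lstm_pass(xs[::-1], *bw)[::-1]
    h = np.concatenate([h_f, h_b], axis=1)
    return sigmoid(h @ w_out + b_out)

rng = np.random.default_rng(0)
D, H, T = 4, 3, 5  # toy embedding dim, hidden size, sequence length
make = lambda: (rng.normal(size=(4*H, D)), rng.normal(size=(4*H, H)),
                np.zeros(4*H))
scores = bilstm_extract_scores(rng.normal(size=(T, D)), make(), make(),
                               rng.normal(size=2*H), 0.0)
```

Training would fit these scores against 0/1 extraction labels with binary cross-entropy; positions scoring above a threshold are copied into the summary.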
seq2seq for Summary Abstraction
-
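A compact sketch of the encoder-decoder idea, with plain tanh-RNN cells standing in for the LSTMs for brevity: the encoder compresses the article into a final state, and the decoder generates summary tokens greedily from it. The class, dimensions, and `bos`/`eos` ids are illustrative assumptions; the weights are random, so the decoded tokens are arbitrary until trained.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

class Seq2Seq:
    # Toy encoder-decoder; tanh-RNN cells stand in for LSTM cells.
    def __init__(self, vocab, D, H, seed=0):
        rng = np.random.default_rng(seed)
        self.E = rng.normal(0, 0.1, (vocab, D))      # shared embeddings
        self.W_enc = rng.normal(0, 0.1, (H, D + H))  # encoder cell
        self.W_dec = rng.normal(0, 0.1, (H, D + H))  # decoder cell
        self.W_out = rng.normal(0, 0.1, (vocab, H))  # vocab projection

    def encode(self, tokens):
        # Fold the whole source sequence into one final hidden state.
        h = np.zeros(self.W_enc.shape[0])
        for t in tokens:
            h = np.tanh(self.W_enc @ np.concatenate([self.E[t], h]))
        return h

    def greedy_decode(self, tokens, bos, eos, max_len=10):
        # Start from the encoder state; feed each predicted token back
        # in until <eos> or the length limit.
        h = self.encode(tokens)
        out, t = [], bos
        for _ in range(max_len):
            h = np.tanh(self.W_dec @ np.concatenate([self.E[t], h]))
            t = int(np.argmax(softmax(self.W_out @ h)))
            if t == eos:
                break
            out.append(t)
        return out

model = Seq2Seq(vocab=20, D=4, H=6)
summary = model.greedy_decode([5, 6, 7], bos=1, eos=2)
```

Training uses teacher forcing with cross-entropy over `W_out`'s softmax at each decoder step.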
seq2seq + Attention for Summary Abstraction
-
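The attention component can be isolated as a small function: at each decoder step, every encoder hidden state is scored against the current decoder state, the scores are softmaxed into weights, and the weighted sum gives a context vector. This is a Luong-style dot-product sketch with random stand-in states; the actual scoring function in the project may differ.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def dot_attention(dec_h, enc_hs):
    # Score each encoder state against the decoder state, normalize
    # into attention weights, and return the weighted context vector.
    scores = enc_hs @ dec_h          # (T,)
    weights = softmax(scores)        # sums to 1
    context = weights @ enc_hs       # (H,)
    return context, weights

rng = np.random.default_rng(0)
H, T = 5, 6  # toy hidden size and source length
enc_states = rng.normal(size=(T, H))  # stand-in encoder states
dec_state = rng.normal(size=H)        # stand-in decoder state

context, weights = dot_attention(dec_state, enc_states)
```

The context vector is then combined (e.g. concatenated) with the decoder state before the vocabulary projection, letting the decoder focus on different parts of the article at each output step instead of relying on one fixed encoder state.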