A dumb question: after training a char-based RNN language model on a large dataset (say millions of words), how could one use the model for a different task like sentiment analysis?
Most of the examples I've seen so far combine model training and sentiment analysis in one go (typically a LookupTable as the first layer for word embeddings, followed by two LSTM layers, etc.). I was wondering: wouldn't it be more efficient to reuse a pre-trained RNN model and repurpose it for classification tasks?
Is it possible and how? Thanks!
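For what it's worth, the usual recipe is: keep the pretrained recurrent layers frozen, use their final hidden state as a feature vector, and train only a small classifier head on top. Below is a minimal, runnable sketch of that idea in plain NumPy. The "pretrained encoder" here is a hypothetical stand-in (a fixed random projection of character counts), not a real char-RNN; only the shape of the workflow is the point.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Stand-in for a pretrained char-RNN encoder (hypothetical) ---
# In practice these would be the pretrained LSTM layers with their
# weights frozen; here a fixed random projection of a character
# histogram plays that role so the sketch is self-contained.
VOCAB = 128           # ASCII characters
HIDDEN = 16           # size of the "pretrained" hidden state
W_frozen = rng.normal(size=(VOCAB, HIDDEN))  # frozen, never updated

def encode(text):
    """Map a string to a fixed-size feature vector via the frozen encoder."""
    counts = np.zeros(VOCAB)
    for ch in text:
        counts[ord(ch) % VOCAB] += 1
    return np.tanh(counts @ W_frozen)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy labeled data for the downstream task (1 = positive, 0 = negative).
texts = ["great movie, loved it", "wonderful and fun",
         "terrible film, hated it", "awful and boring"]
labels = np.array([1, 1, 0, 0])
X = np.stack([encode(t) for t in texts])

# --- Trainable classifier head: the ONLY parameters we update ---
w = np.zeros(HIDDEN)
b = 0.0
lr = 0.5
for _ in range(500):
    p = sigmoid(X @ w + b)          # predicted probabilities
    grad = p - labels               # gradient of the log loss
    w -= lr * (X.T @ grad) / len(labels)
    b -= lr * grad.mean()

preds = (sigmoid(X @ w + b) > 0.5).astype(int)
```

Swapping the real pretrained LSTM in for `encode` (and optionally unfreezing its top layer for fine-tuning) is the same pattern, just with the sequence model producing the features.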
Here are some sentiment analysis examples:
https://github.com/Element-Research/rnn/blob/master/examples/sequence-to-one.lua
https://github.com/fchollet/keras/blob/master/examples/imdb_lstm.py