In your paper, for each dataset, you pre-train 200-dimensional word embeddings. Which corpus is used to train the word vectors: only the model's training dataset (e.g. IMDB), or an additional external corpus such as Wikipedia?
Thank you!
Thanks for your question.
As mentioned in the paper, we pre-train the 200-dimensional word embeddings on each dataset from (Tang et al., 2015a), such as IMDB and Yelp.