question for version #4
@jind11 The training is very fast even on CPU, so you can train your own model:

$ git clone git@github.com:FrankWork/acnn.git
$ cd acnn
$ git checkout baseap
$ python main.py

I reran the program and got the following results:
If you can reach the performance reported in the paper, please let me know.
I have one question: where did you get the pre-trained embedding file? Was it trained on English Wikipedia? Thanks!
I see. In my experiments with different embedding sources, the choice of pre-trained embeddings does influence performance. I am going to train 300- and 400-dimensional word2vec embeddings on English Wikipedia myself.
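For reference, here is a minimal sketch of training such embeddings with gensim. The corpus path, output file name, and all hyperparameters other than the dimension are assumptions; the Wikipedia dump is assumed to be preprocessed into one tokenized sentence per line.

# Minimal word2vec training sketch with gensim (corpus/output names are hypothetical).
from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

# One tokenized sentence per line, streamed from disk.
sentences = LineSentence("wiki_sentences.txt")

# In gensim >= 4.0 the dimension argument is `vector_size`; older versions call it `size`.
model = Word2Vec(sentences, vector_size=300, window=5, min_count=5, workers=4)

# Save in the plain-text word2vec format that most embedding loaders expect.
model.wv.save_word2vec_format("wiki_embedding_300d.txt", binary=False)

The same script with vector_size=400 would produce the 400-dimensional variant.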
I noticed that in the log file, the "baseline+attentive pooling" version reaches this result: 05-10 21:12 Epoch: 21 Train: 94.81% Test: 75.19%. What are the detailed model configurations for this result? If possible, could you send me the model file? My email is jindi15@mit.edu. I have tried my best but cannot reach this performance. Thank you so much!
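For anyone comparing configurations: the "baseline+attentive pooling" version presumably follows the attentive pooling formulation from the multi-level attention CNN paper (Wang et al., 2016). Below is a minimal numpy sketch of that pooling step under that assumption; the function, variable names, and shapes are illustrative and not taken from this repo's code.

import numpy as np

def softmax(x, axis=0):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attentive_pooling(R, U, WL):
    # R:  (n, dc)  convolution output over n window positions
    # U:  (dc, dr) learned correlation matrix
    # WL: (dr, m)  embedding matrix for the m relation classes
    G = R @ U @ WL          # (n, m) position/class correlation scores
    A = softmax(G, axis=0)  # attention over positions, one column per class
    # (dc, m) attention-weighted features, then max over classes per dimension
    return (R.T @ A).max(axis=1)  # (dc,) pooled sentence representation

# Example with arbitrary shapes:
rng = np.random.default_rng(0)
w = attentive_pooling(rng.normal(size=(50, 100)),
                      rng.normal(size=(100, 80)),
                      rng.normal(size=(80, 19)))
print(w.shape)  # (100,)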