
question for version #4

Open · jind11 opened this issue Dec 6, 2017 · 5 comments

jind11 commented Dec 6, 2017

I noticed that in the log file, the "baseline+attentive pooling" version reaches this result: 05-10 21:12 Epoch: 21 Train: 94.81% Test: 75.19%. What are the detailed model configurations behind it? If possible, could you send me the model file? My email is jindi15@mit.edu. I have tried my best but cannot reach this performance. Thank you so much!

FrankWork (Owner) commented Dec 8, 2017

@jind11 The training is very fast even on CPU, so you can train your own model:

$ git clone git@github.com:FrankWork/acnn.git
$ cd acnn
$ git checkout baseap
$ python main.py

I reran the program and got the following results:

12-08 14:22 Epoch: 16 Train: 74.76% Test: 73.93%
12-08 14:22 Epoch: 17 Train: 74.95% Test: 74.59%
12-08 14:22 Epoch: 18 Train: 75.59% Test: 74.56%
12-08 14:23 Epoch: 19 Train: 76.80% Test: 74.22%
12-08 14:23 Epoch: 20 Train: 77.56% Test: 73.93%
12-08 14:23 Epoch: 21 Train: 78.33% Test: 74.78%
12-08 14:23 Epoch: 22 Train: 78.45% Test: 75.04%
12-08 14:24 Epoch: 23 Train: 79.99% Test: 75.30%
12-08 14:24 Epoch: 24 Train: 80.29% Test: 75.44%
12-08 14:24 Epoch: 25 Train: 79.96% Test: 75.15%

FrankWork (Owner) commented:

If you can reach the performance in the paper, please let me know.

jind11 (Author) commented Dec 21, 2017

I have one question: where did you get the pre-trained embedding file? Is it trained on English Wikipedia? Thanks!

FrankWork (Owner) commented Dec 21, 2017

jind11 (Author) commented Dec 21, 2017

I see. According to my experiments with different embedding sources, the pre-trained embeddings have an influence on performance. I am going to train 300- and 400-dimensional word2vec embeddings on English Wikipedia myself.
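
For reference, a minimal sketch of how such embeddings could be trained with gensim (this is not part of the repo; the dump file name, output path, and hyperparameters are illustrative, and the gensim >= 4 API is assumed, which uses vector_size instead of the older size argument):

# Train word2vec on an English Wikipedia dump with gensim.
from gensim.corpora import WikiCorpus
from gensim.models import Word2Vec

# Parse the raw dump into plain tokenized articles; passing dictionary={}
# skips the vocabulary scan that WikiCorpus would otherwise perform.
wiki = WikiCorpus('enwiki-latest-pages-articles.xml.bz2', dictionary={})

class WikiSentences:
    """Re-iterable wrapper so Word2Vec can make multiple passes over the dump."""
    def __iter__(self):
        for tokens in wiki.get_texts():
            yield tokens

# vector_size=300 matches the smaller embedding dimension discussed above;
# rerun with vector_size=400 for the second set of embeddings.
model = Word2Vec(WikiSentences(), vector_size=300, window=5,
                 min_count=5, workers=4)
model.wv.save_word2vec_format('wiki.en.300d.txt', binary=False)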
