Ask about the result #9
Comments
Could you run the experiment using Docker with the Dockerfile in the repo? The trained model is uploaded in the release section, and you can evaluate it to see whether the numbers match. |
Have you solved your problem after setting a bigger batch size? I have the same problem now; my batch size is 24, but my result is lf(eg)=78, ex(eg)=82. Thank you! |
Hi, could you refer to #4, where a larger batch size should improve the accuracy. In addition, in that post, the subtask accuracy at epoch 0 was already good with batch size 32 (except for the validation accuracy, which had a bug before). If your subtask accuracy is significantly lower with a similar batch size, please double-check your package versions, especially pytorch and transformers. |
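To rule out version mismatch as suggested above, a quick way to inspect the installed packages is the standard-library `importlib.metadata`; the helper name `pkg_version` below is just for illustration, and the exact versions to compare against are whatever the repo's requirements file pins:

```python
# Sketch: print installed versions of the packages most likely to affect
# reproducibility. Package names are the usual PyPI names; adjust as needed.
from importlib.metadata import version, PackageNotFoundError

def pkg_version(name: str) -> str:
    """Return the installed version of `name`, or 'not installed'."""
    try:
        return version(name)
    except PackageNotFoundError:
        return "not installed"

for pkg in ("torch", "transformers"):
    print(f"{pkg}: {pkg_version(pkg)}")
```

Comparing this output against the versions pinned in the repo is usually faster than re-running a full training job.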
Hi @Smile0524 @1456416403, I re-ran a few experiments with smaller batch sizes; results are listed below. They show that a larger batch size gives better results. I recommend training with a batch size > 64, which should give results similar to the best model trained with batch size 256. [Table: results for batch sizes 24, 64, and 128] |
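When GPU memory rules out a per-step batch above 24, gradient accumulation is a standard way to reach the recommended effective batch size of > 64. Whether this repo exposes an accumulation option is not stated in the thread, so the arithmetic below is only a general illustration:

```python
def effective_batch_size(per_step_batch: int, accum_steps: int, n_gpus: int = 1) -> int:
    """Effective batch size when gradients are accumulated over several
    forward/backward passes before each optimizer step."""
    return per_step_batch * accum_steps * n_gpus

# A per-step batch of 24 accumulated over 3 steps already exceeds
# the recommended threshold of 64:
print(effective_batch_size(24, 3))  # 72
```

In a PyTorch training loop this corresponds to calling `optimizer.step()` and `optimizer.zero_grad()` only every `accum_steps` iterations, scaling each loss by `1 / accum_steps`.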
Thank you very much! |
Hello!
I read the paper you published and also git cloned the code. I ran the code successfully, but the result I got is far from yours:
[wikidev.jsonl, epoch 4] overall: 76.1, agg: 89.1, sel: 96.3, wn: 97.1, wc: 92.1, op: 98.4, val: 91.1
I find that the where_col (wc) and where_val (val) results are worse than the others.
What do you think about these issues?