About the results #2
The score you get with the BERT version is slightly low. My parameter settings are batch_size=4 and lr=3e-5; you can try adjusting your hyperparameters and see how the results change.
Thank you! Now I can reach 0.695 (best relation F1) in eval on the 15res dataset. Were you able to reproduce the paper's results on 15res?
After training the BERT version I implemented on the 15res dataset, the validation-set F1 was 69.1, still slightly below the result in the paper (70.75). I am still investigating.
Hi, I tried the BiLSTM on 14res.txt, but I got a low score of around 0.527 (relation F1). I tried adjusting the parameters, but it didn't help.
I think there may be a problem with the F1 calculation: the F1 over the whole dataset should not be computed by averaging the per-batch F1 scores.
Yeah, you are right; the metrics code has a problem.
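To illustrate the issue being discussed (not the repository's actual metrics code): averaging per-batch F1 scores generally does not equal the F1 computed over the whole evaluation set, because F1 is not linear in the underlying counts. The correct approach is to accumulate true positives, false positives, and false negatives across all batches and compute F1 once at the end. A minimal sketch with hypothetical per-batch counts:

```python
def f1(tp, fp, fn):
    # Precision, recall, and F1 from raw counts; return 0 when undefined.
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

# Hypothetical per-batch (tp, fp, fn) counts for relation extraction.
batches = [(8, 2, 2), (1, 4, 5)]

# Wrong: average the per-batch F1 scores.
avg_f1 = sum(f1(*b) for b in batches) / len(batches)

# Right: accumulate counts over the whole eval set, then compute F1 once.
tp = sum(b[0] for b in batches)
fp = sum(b[1] for b in batches)
fn = sum(b[2] for b in batches)
micro_f1 = f1(tp, fp, fn)

print(round(avg_f1, 4), round(micro_f1, 4))  # the two values differ
```

Here the batch-averaged score (about 0.49) and the whole-dataset score (about 0.58) diverge because the second batch, with very few true positives, drags the average down disproportionately.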
As you both pointed out, I have fixed this problem in the latest version and also updated the BERT version. Thank you for the valuable feedback.
On the 15res dataset, the best eval F1 I got with the BiLSTM version (your code) is 0.6382 versus 0.6426 in the paper, and with the BERT version 0.6564 versus 0.7075 in the paper. I can't get close to the paper's results; can you give me some suggestions? Thank you very much!