The NER results for 5 runs #14
Comments
Hi Deming, We used the predictions from NER models with different seeds. You can download them here. Best, |
Thanks! |
Hi Zexuan, Due to the CUDA memory limitation, I added gradient accumulation to your code and tried to run ALBERT with seeds 0~4, but I get entity F1 of 89.71, 89.93, 89.53, 89.93, 90.18. Could you share all the ent_pred_test.json files for ALBERT (cross, W=100)? It is important for me to compare fairly with your RE method. Thanks! Best, |
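For a like-for-like comparison, the five entity F1 numbers above can be summarized as a mean and sample standard deviation. A minimal sketch in plain Python, using only the scores quoted in this comment:

```python
from statistics import mean, stdev

# Entity F1 from the five ALBERT runs (seeds 0~4) quoted above
scores = [89.71, 89.93, 89.53, 89.93, 90.18]

avg = mean(scores)   # about 89.86
sd = stdev(scores)   # sample standard deviation across seeds
print(f"entity F1: {avg:.2f} +/- {sd:.2f}")
```

Reporting mean plus spread across seeds makes it easier to tell whether the gap to the paper's numbers is within run-to-run noise.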
Hi Deming, I am not sure how you modified the code exactly. My runs are based on 4 GPUs without gradient accumulation. I have added all the ent_pred_test.json files for ALBERT (cross, W=100) here. Best, |
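On the gradient-accumulation point: in the ideal case (equal-sized micro-batches, no batch-dependent layers), averaging accumulated micro-batch gradients reproduces the full-batch gradient, so differences between the two setups usually come from the effective batch size, learning-rate schedule, or seed-dependent dropout rather than accumulation itself. A toy scalar sketch of that equivalence, not the actual PURE training code:

```python
# Toy model: mean squared loss (w - x)^2 over a batch of scalars.
# Real transformer training adds optimizer state, dropout, and
# batch-dependent effects that this sketch deliberately ignores.
xs = [1.0, 2.0, 3.0, 4.0]
w = 0.5

def grad(batch, w):
    # d/dw of the mean loss (w - x)^2 over the batch
    return sum(2 * (w - x) for x in batch) / len(batch)

full = grad(xs, w)                               # full-batch gradient
micro_batches = [xs[0:2], xs[2:4]]               # two equal halves
acc = sum(grad(b, w) for b in micro_batches) / len(micro_batches)
print(full, acc)  # identical in this idealized case
```

If the micro-batches were unequal in size, or the loss were a sum rather than a mean, the accumulated gradient would need different rescaling, which is one common source of discrepancies.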
Thanks for your rapid reply! I will try to run with multiple GPUs. Best, |
Hi Zexuan, Sorry to bother you again. Could you additionally share the ent_pred_test.json for SciERC? Thanks! Best, |
Hi Deming, Sure! Please check here. Best, |
Thanks for your rapid reply! Best, |
Hi Zexuan,
The paper reports the average Rel F1 over 5 runs. Do the 5 RE runs use the same ent_pred_test.json (e.g. from the BERT (cross, W=300) entity model), or do they use different NER predictions from NER models with different seeds?
Could you please share the ent_pred_test.json for the 5 RE runs?
Thanks!
Best,
Deming