f1_with_ner2 #37
Comments
Using all-levitated markers is simply a weak approach: levitated markers are themselves tied to their span via attention, so if you additionally use attention to bind four of them together, the binding becomes quite loose. We also implemented the all-levitated-marker variant as one item in the paper's ablation study: https://github.com/thunlp/PL-Marker/blob/master/run_levitatedpair.py
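To make the "bound via attention" point concrete, here is a minimal sketch (a hypothetical helper, not the repo's actual code) of how a pair of levitated markers is typically tied to a candidate span: the two markers reuse the position ids of the span boundaries and attend to the text and to each other, while text tokens cannot attend back to any marker.

```python
import numpy as np

def build_levitated_inputs(n_text, spans):
    """Build position ids and a directional attention mask for levitated markers.

    n_text : number of ordinary text tokens (positions 0..n_text-1).
    spans  : list of (start, end) token-index pairs; each span gets one
             (start-marker, end-marker) pair appended after the text.

    Hypothetical illustration of the levitated-marker idea: markers share
    position ids with the span boundaries and see the text plus their
    partner marker, but stay invisible to the text tokens.
    """
    n = n_text + 2 * len(spans)
    position_ids = np.arange(n)
    attn = np.zeros((n, n), dtype=np.int64)
    attn[:n_text, :n_text] = 1                     # text tokens see only text
    for k, (s, e) in enumerate(spans):
        a, b = n_text + 2 * k, n_text + 2 * k + 1  # this span's marker pair
        position_ids[a], position_ids[b] = s, e    # share boundary positions
        attn[a, :n_text] = 1                       # start marker sees the text
        attn[b, :n_text] = 1                       # end marker sees the text
        attn[a, b] = attn[b, a] = 1                # the pair sees each other
        attn[a, a] = attn[b, b] = 1                # self-attention
    return position_ids, attn
```

Because the only link between a marker pair and its span is this attention pattern (rather than markers inserted inline next to the span), stacking more relations on top of it dilutes the binding, which is the weakness described above.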
What do you think causes ner_f1 to keep dropping during training of the all-levitated model, while in the half-levitated model ner_f1 always stays at 1.0?
Probably an implementation problem; you can just use my implementation, run_levitatedpair.py.
Your output directory now overlaps with a previous run's directory, so when the model is loaded for the final test it loads the wrong checkpoint, picking up the earlier model's parameters (hence the warning "weights from pretrained model not used in BertForACEBothxxxx"). Please change --output_dir to a fresh directory.
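The fix is just a matter of pointing the run at a directory that holds no checkpoints from an earlier experiment; a sketch (the directory name here is illustrative, and the remaining arguments are whatever your original script passed):

```shell
# Use a directory with no checkpoints from a previous run, so the final
# evaluation loads this run's weights rather than stale ones.
mkdir -p output/levitated-run-new
python run_levitatedpair.py \
    --output_dir output/levitated-run-new \
    ...  # keep the rest of your training arguments unchanged
```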
OK, thanks!
Hello, I modified a version of your code to implement the all-levitated-marker method. The results show that f1 reaches the expected level, but f1_with_ner and ner_f1 are very poor, and ner_f1 keeps getting worse as training proceeds. I have searched for a long time without finding the problem; I also seem to be training with the golden dev file, so why does the NER f1 keep dropping, while in your code ner_f1 stays at 1.0? It would be great if you could look at where my code goes wrong, or I would also be grateful if you could point out where the problem might lie. Below is a partial screenshot of training, and attached at the end are my modified code and run script. Thanks!
run_train_re_approx.zip