
Question about the SOTA claims #12

Closed
wn1652400018 opened this issue Mar 14, 2022 · 2 comments

Comments

@wn1652400018

wn1652400018 commented Mar 14, 2022

Hello. The paper claims SOTA results on several NER datasets, but on https://paperswithcode.com/area/natural-language-processing/named-entity-recognition-ner I see that the reported SOTA for quite a few of these datasets is higher than the numbers in the paper. Could you explain the discrepancy?

@ljynlp
Owner

ljynlp commented Mar 14, 2022

Because papers start from different motivations, some of them are not really comparable side by side. For example, some use larger and better pretrained models, optimize the pretrained model itself, or incorporate external knowledge; the gains from these approaches are generally larger than those from changing the model architecture or the tagging scheme, so a direct comparison would not be fair.
So the SOTA we claim is relative: it is just one way of quantifying model performance. The essential point is to show that our method solves Unified NER while still achieving strong results.

@wn1652400018
Author

wn1652400018 commented Mar 14, 2022 via email

@ljynlp ljynlp closed this as completed Mar 14, 2022
2 participants