---Original Message---
From: "Jingye ***@***.***>
Sent: Monday, March 14, 2022, 6:28 PM
To: ***@***.***>;
Cc: ***@***.******@***.***>;
Subject: Re: [ljynlp/W2NER] Question about SOTA (Issue #12)
Because different papers start from different motivations, some of them are not really suitable for side-by-side comparison. For example, some use larger and better pre-trained models, optimize the pre-trained model itself, or incorporate external knowledge; the gains from such approaches are generally larger than those from modifying the model architecture or the labeling scheme, so a direct comparison would not be entirely fair.
The "SOTA" we refer to is therefore relative. It is just one way of quantifying model performance, and its real purpose is to show that while solving unified NER, we can still maintain solid performance.
Hi author, the paper claims that the model reaches SOTA on multiple NER datasets, but on https://paperswithcode.com/area/natural-language-processing/named-entity-recognition-ner I found that the SOTA results for quite a few datasets are higher than the numbers reported in the paper. Could you explain why?