
RepEval(WS)-2016-Evaluating Word Embeddings Using a Representative Suite of Practical Tasks #301

Summary:

Evaluating word embeddings solely on word similarity benchmarks is insufficient, so this paper proposes in-vivo evaluation: judging word embeddings by their performance on a representative suite of practical downstream tasks. There is also an online version that evaluates embeddings automatically once they are submitted.
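
To make the contrast concrete, here is a minimal sketch (not the paper's actual benchmark suite) of the two evaluation styles: an in-vitro word-similarity correlation versus an in-vivo downstream task that uses the embeddings as features. The toy embeddings, similarity ratings, and sentiment data below are made up purely for illustration.

```python
# Sketch of "in vitro" (word-similarity correlation) vs. "in vivo"
# (downstream task) evaluation of word embeddings.
# All data here are tiny toy stand-ins, not the paper's benchmarks.
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy embedding table: word -> 50-dim vector (random, for illustration only).
vocab = ["good", "great", "bad", "terrible", "movie", "film"]
emb = {w: rng.normal(size=50) for w in vocab}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# --- In vitro: correlate model similarities with human ratings ---
word_sim_benchmark = [          # (word1, word2, human rating) -- made-up numbers
    ("good", "great", 9.0),
    ("bad", "terrible", 8.5),
    ("good", "bad", 2.0),
    ("movie", "film", 9.5),
]
model_scores = [cosine(emb[a], emb[b]) for a, b, _ in word_sim_benchmark]
human_scores = [r for _, _, r in word_sim_benchmark]
rho, _ = spearmanr(model_scores, human_scores)
print(f"in vitro (word similarity) Spearman rho: {rho:.3f}")

# --- In vivo: embeddings as features for a downstream task ---
# Here: a toy sentiment classifier over averaged word vectors.
sentences = [(["good", "movie"], 1), (["great", "film"], 1),
             (["bad", "movie"], 0), (["terrible", "film"], 0)]
X = np.stack([np.mean([emb[w] for w in words], axis=0) for words, _ in sentences])
y = np.array([label for _, label in sentences])
clf = LogisticRegression().fit(X, y)
print(f"in vivo (downstream task) train accuracy: {clf.score(X, y):.2f}")
```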

Resource:

  • pdf
  • code
  • paper-with-code

Paper information:

  • Author: Manning
  • Dataset:
  • keywords:

Notes:

On the evaluation side, it would be worth using a few more datasets to verify this, for example checking how embedding vocabulary coverage for character-based languages affects NER evaluation (see the sketch below).
(A system that makes such evaluation more convenient could itself be worth a paper.)
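
The coverage idea above could be checked with something like the following sketch: count how many NER-corpus word types and running tokens fall inside the embedding vocabulary. The file paths and the CoNLL-style "token TAG" format are assumptions for illustration, not fixed by the paper.

```python
# Rough coverage check: what fraction of an NER corpus is covered
# by a given embedding vocabulary (types and tokens).
# Paths and file formats below are hypothetical examples.
from collections import Counter

def embedding_vocab(path):
    """Read the vocabulary from a word2vec-style text file: 'word v1 v2 ...' per line."""
    with open(path, encoding="utf-8") as f:
        return {line.split(" ", 1)[0] for line in f if line.strip()}

def ner_tokens(path):
    """Count tokens in a CoNLL-style NER file: one 'token TAG' pair per line."""
    tokens = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # blank lines separate sentences
                tokens[line.split()[0]] += 1
    return tokens

def coverage(vocab, tokens):
    """Return (type coverage, token coverage) of the vocabulary over the corpus."""
    covered_types = sum(1 for t in tokens if t in vocab)
    covered_tokens = sum(c for t, c in tokens.items() if t in vocab)
    return covered_types / len(tokens), covered_tokens / sum(tokens.values())

if __name__ == "__main__":
    vocab = embedding_vocab("embeddings.txt")   # hypothetical path
    tokens = ner_tokens("ner_train.conll")      # hypothetical path
    type_cov, token_cov = coverage(vocab, tokens)
    print(f"type coverage:  {type_cov:.2%}")
    print(f"token coverage: {token_cov:.2%}")
```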

Model Graph:

Result:

[result images from the paper omitted]

Thoughts:

Next Reading:
