
Syntaxnet: Where is the baseline_eval.py #1211

Closed
felixgwu opened this issue Mar 19, 2017 · 4 comments
Assignees
Labels
stat:awaiting model gardener Waiting on input from TensorFlow model gardener

Comments

@felixgwu

I am wondering where the baseline_eval.py mentioned in the CoNLL 2017 document is.
I searched through the repo, but couldn't find it.

@concretevitamin

/cc @calberti @bogatyy

@concretevitamin concretevitamin added the stat:awaiting model gardener Waiting on input from TensorFlow model gardener label Mar 19, 2017
@djweiss

djweiss commented Mar 20, 2017

Looks like that's a dangling reference -- we will fix that and add more up-to-date evaluation instructions. In the meantime, please see evaluator.py, segmenter-evaluator.py, and parser-to-conll.py in the dragnn/tools directory to run evaluations.
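To illustrate what these evaluation scripts compute, here is a minimal sketch (my own, not code from the DRAGNN tools) of the standard unlabeled and labeled attachment scores (UAS/LAS) that a dependency-parser evaluator reports. It assumes both parses use the same tokenization; each token is a `(head_index, dependency_label)` pair:

```python
# Hypothetical sketch: UAS/LAS over parallel gold and predicted parses.
# UAS counts tokens whose head is correct; LAS additionally requires the
# dependency label to match.

def attachment_scores(gold, pred):
    """Return (UAS, LAS) over parallel lists of sentences."""
    total = uas_hits = las_hits = 0
    for gold_sent, pred_sent in zip(gold, pred):
        for (g_head, g_label), (p_head, p_label) in zip(gold_sent, pred_sent):
            total += 1
            if g_head == p_head:
                uas_hits += 1
                if g_label == p_label:
                    las_hits += 1
    return uas_hits / total, las_hits / total

# One two-token sentence: the second token's predicted head is wrong.
gold = [[(0, "root"), (1, "obj")]]
pred = [[(0, "root"), (0, "obj")]]
print(attachment_scores(gold, pred))  # (0.5, 0.5)
```

This matches only when tokenizations agree, which is exactly the limitation martinpopel points out below for the simple evaluators.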

@concretevitamin concretevitamin added stat:awaiting model gardener Waiting on input from TensorFlow model gardener and removed stat:awaiting model gardener Waiting on input from TensorFlow model gardener labels Mar 20, 2017
@martinpopel

The official CoNLL 2017 evaluation script can be downloaded from http://universaldependencies.org/conll17/evaluation.html. Note that unlike the above-mentioned scripts, it handles differing tokenizations and multi-word tokens.
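The multi-word-token issue can be seen directly in the CoNLL-U format: files contain range lines (ID like `1-2`) for surface tokens, plus the underlying syntactic words. A rough sketch (my own, assuming the standard 10-column CoNLL-U layout) of extracting only the syntactic words that evaluation should score:

```python
# Sketch: read syntactic words from CoNLL-U text, skipping comment lines,
# multi-word token ranges ("1-2"), and empty nodes ("1.1"). Columns are the
# standard CoNLL-U fields: ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL,
# DEPS, MISC.

def syntactic_words(conllu_text):
    """Yield (form, head, deprel) for each syntactic word."""
    for line in conllu_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        cols = line.split("\t")
        if "-" in cols[0] or "." in cols[0]:
            continue  # multi-word token range or empty node
        yield cols[1], int(cols[6]), cols[7]

sample = (
    "# sent_id = 1\n"
    "1-2\tvámonos\t_\t_\t_\t_\t_\t_\t_\t_\n"
    "1\tvamos\tir\tVERB\t_\t_\t0\troot\t_\t_\n"
    "2\tnos\tnosotros\tPRON\t_\t_\t1\tobj\t_\t_\n"
)
print(list(syntactic_words(sample)))
# [('vamos', 0, 'root'), ('nos', 1, 'obj')]
```

The official script goes further and aligns the two files' tokenizations before scoring; this sketch only shows why naive line-by-line comparison breaks.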

@hans

hans commented Apr 21, 2017

Hi @djweiss, I'm not able to use the evaluator.py script with the distributed CoNLL 2017 baseline model. (I've tried loading the English and Spanish models; there's no difference.)

I've reposted the error on a separate GH issue: #1355 (comment)

It seems like the models might be mismatched with the public eval code. Or maybe I'm doing something wrong in the invocation?

I appreciate your help, thanks!
