SyntaxNet: Where is baseline_eval.py? #1211
Comments
Looks like that's a dangling reference -- we will fix that and add more up-to-date evaluation instructions. In the meantime, please see evaluator.py, segmenter-evaluator.py, and parser-to-conll.py in the dragnn/tools directory to run evaluations.
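For context, here is a minimal sketch of what such a parser evaluation computes: unlabeled and labeled attachment scores (UAS/LAS) over two CoNLL-U files whose word lines are already aligned. This is illustrative only, not the DRAGNN tooling itself; the file names are hypothetical.

```python
# Minimal UAS/LAS sketch over two token-aligned CoNLL-U files.
# Illustrative only; the dragnn/tools scripts mentioned above are the
# supported way to run evaluations.

def read_tokens(path):
    """Yield (HEAD, DEPREL) for each syntactic word line in a CoNLL-U file."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            cols = line.split("\t")
            # Skip multi-word token ranges ("1-2") and empty nodes ("3.1").
            if "-" in cols[0] or "." in cols[0]:
                continue
            yield cols[6], cols[7]  # HEAD, DEPREL columns

def attachment_scores(gold_path, system_path):
    gold = list(read_tokens(gold_path))
    system = list(read_tokens(system_path))
    assert len(gold) == len(system), "files must have identical tokenization"
    n = len(gold)
    uas = sum(g[0] == s[0] for g, s in zip(gold, system)) / n  # head matches
    las = sum(g == s for g, s in zip(gold, system)) / n        # head + label match
    return uas, las

if __name__ == "__main__":
    # Hypothetical file names for illustration.
    uas, las = attachment_scores("gold.conllu", "system.conllu")
    print("UAS: %.2f%%  LAS: %.2f%%" % (100 * uas, 100 * las))
```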
The official CoNLL 2017 evaluation script can be downloaded from http://universaldependencies.org/conll17/evaluation.html. Note that unlike the above-mentioned scripts, it handles different tokenization and multi-word tokens.
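To illustrate why multi-word tokens matter, here is a small, hypothetical CoNLL-U fragment: a naive line-by-line comparison like the sketch above would conflate the surface token with the syntactic words it covers, which is why the official script aligns gold and system words rather than assuming identical tokenization.

```python
# Hypothetical Spanish fragment: the surface token "dámelo" is a
# multi-word token (ID range "1-3") covering three syntactic words.
fragment = """\
1-3\tdámelo\t_\t_\t_\t_\t_\t_\t_\t_
1\tda\tdar\tVERB\t_\t_\t0\troot\t_\t_
2\tme\tyo\tPRON\t_\t_\t1\tiobj\t_\t_
3\tlo\tél\tPRON\t_\t_\t1\tobj\t_\t_
"""

surface_tokens = 0
syntactic_words = 0
for line in fragment.splitlines():
    token_id = line.split("\t")[0]
    if "-" in token_id:
        surface_tokens += 1   # range line: one surface token, no HEAD/DEPREL
    else:
        syntactic_words += 1  # word lines carry the dependency annotation

print(surface_tokens, "surface token covers", syntactic_words, "syntactic words")
```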
Hi @djweiss, I'm not able to get the evaluation working. I've reposted the error on a separate GH issue: #1355 (comment). It seems like the models might be mismatched with the public eval code, or maybe I'm doing something wrong in the invocation? I appreciate your help, thanks!
I am wondering where the baseline_eval.py mentioned in the CoNLL 2017 document is. I searched through the repo, but couldn't find it.