
Validation accuracy #60

Closed
DaDaPi3 opened this issue Jun 28, 2017 · 2 comments

Comments


DaDaPi3 commented Jun 28, 2017

The validation accuracy in score.json differs from the one in log.txt/stdout.txt.

Also, the avg_accuracy in validation_eval.py should be called overall_accuracy, since it is the overall (micro-averaged) accuracy rather than an average of per-class accuracies.
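To illustrate why the name matters, here is a hypothetical sketch of the two metrics (the actual implementation in validation_eval.py may differ): "overall accuracy" is the fraction of all pixels labeled correctly, while "average accuracy" usually suggests the mean of per-class accuracies, which is a different number on imbalanced data.

```python
import numpy as np

def overall_accuracy(y_true, y_pred):
    """Fraction of all pixels classified correctly (micro average)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float((y_true == y_pred).mean())

def mean_per_class_accuracy(y_true, y_pred, num_classes):
    """Mean of per-class accuracies (macro average) -- what the
    name avg_accuracy would normally suggest."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    accs = []
    for c in range(num_classes):
        mask = y_true == c
        if mask.any():
            # Accuracy restricted to pixels whose true label is c.
            accs.append(float((y_pred[mask] == c).mean()))
    return float(np.mean(accs))
```

On a labeling like `y_true=[0,0,0,1]`, `y_pred=[0,0,1,1]` the two metrics disagree (0.75 vs. ~0.83), which is why a precise name avoids confusion.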

@DaDaPi3 DaDaPi3 closed this as completed Jun 28, 2017
@DaDaPi3 DaDaPi3 reopened this Jun 28, 2017

DaDaPi3 commented Jun 29, 2017

It seems the validation accuracy in score.json is evaluated against the no_boundary ground truth, while the validation accuracy in log.txt is evaluated against the full reference.
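As a hedged sketch of how the two numbers can diverge (the ignore label and masking scheme here are assumptions, not taken from the repo): if the no_boundary ground truth excludes boundary pixels via an ignore label, accuracy over the remaining pixels will generally differ from accuracy over the full reference.

```python
import numpy as np

# Hypothetical ignore label marking boundary pixels in the ground truth.
BOUNDARY = 255

def pixel_accuracy(y_true, y_pred, ignore_boundary=False):
    """Pixel accuracy, optionally skipping boundary pixels.

    ignore_boundary=True mimics scoring against a no_boundary ground
    truth; False mimics scoring against the full reference.
    """
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    if ignore_boundary:
        keep = y_true != BOUNDARY
        y_true, y_pred = y_true[keep], y_pred[keep]
    return float((y_true == y_pred).mean())
```

For example, with two boundary pixels in a four-pixel reference, the masked score counts only the two interior pixels, so the two variants can report different accuracies for the same prediction.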


lewfish commented Jul 25, 2017

This is true. Ideally the numbers would be the same, but I don't think this should cause any problems as long as you are aware that the numbers are computed differently. If you want to submit a PR for this, feel free.

@lewfish lewfish closed this as completed Oct 27, 2017