
Handle evaluations of sparse-labeled test matrices differently #116

Closed
thcrock opened this issue Apr 18, 2017 · 0 comments
thcrock commented Apr 18, 2017

When a test matrix is keyed on entity id and date and has sparse labels, we want to be more deliberate about what the evaluations mean. We should:

  • Sort and threshold the predictions with NaNs intact, and then remove the NaNs.
  • Add to the evaluations table the number of labeled examples, number of labeled examples above threshold, and number of positive labels. This will help the researcher make sense of different metrics.
  • Add a false positive rate metric (see the sketch after this list).
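
A minimal sketch of the proposed flow, assuming NumPy arrays of scores and binary labels where an unlabeled example carries a NaN label. The function name and return keys here are hypothetical, not the project's actual API; it only illustrates ranking with NaN labels intact, dropping them afterward, and computing the extra counts plus a false positive rate.

```python
import numpy as np


def evaluate_sparse_labels(scores, labels, top_n):
    """Rank with NaN labels intact, then drop them before computing metrics."""
    # Rank all examples by score, keeping unlabeled rows in the ranking so the
    # threshold reflects the full list a researcher would actually act on.
    order = np.argsort(-scores)
    ranked_labels = labels[order]
    above = ranked_labels[:top_n]

    # Only after thresholding do we remove the unlabeled (NaN) rows.
    labeled = ranked_labels[~np.isnan(ranked_labels)]
    labeled_above = above[~np.isnan(above)]

    # Counts proposed for the evaluations table.
    num_labeled = len(labeled)
    num_labeled_above = len(labeled_above)
    num_positive = int(np.nansum(labels))

    # Metrics computed on the labeled subset only.
    true_pos = int(labeled_above.sum())
    false_pos = num_labeled_above - true_pos
    negatives = num_labeled - num_positive
    precision = true_pos / num_labeled_above if num_labeled_above else None
    fpr = false_pos / negatives if negatives else None

    return {
        "num_labeled_examples": num_labeled,
        "num_labeled_above_threshold": num_labeled_above,
        "num_positive_labels": num_positive,
        "precision": precision,
        "false_positive_rate": fpr,
    }
```

Because the metrics are computed only on the labeled subset, the stored counts are what let a researcher judge how much of the top of the list the metrics actually cover.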
@thcrock thcrock self-assigned this Apr 18, 2017
@thcrock thcrock added this to the v0.4 milestone Apr 18, 2017
@thcrock thcrock changed the title Handle evaluations of sparse test matrices differently Handle evaluations of sparse-labeled test matrices differently Apr 18, 2017
ecsalomon added a commit that referenced this issue Apr 19, 2017
Fix evaluation of sparsely-labeled test matrices [Resolves #116]