[GSoC'21] Ability to measure the quality of a CerebUnit's Validation test. #1

Merged: 2 commits into cerebunit:master (Aug 20, 2021)

Conversation

HarshKhilawala
Contributor

Google Summer of Code 2021 @ INCF

Project: Measure the Quality of CerebUnit Validation Tests

Contributions towards the project:

  • Develop the MockData submodule as part of CerebStats to generate mock data files, populated with random values, for running against validation tests.

MockData Class contains the following methods:

| Method name                | Method type   |
| -------------------------- | ------------- |
| count_files                | static method |
| display_files              | static method |
| clear_files                | static method |
| generate_random_data_files | static method |
  • Develop the TestMetrics submodule as part of CerebStats to count true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN), and from these compute specificity, sensitivity, positive predictive value (PPV), and negative predictive value (NPV).

TestMetrics Class contains the following methods:

| Method name       | Method type     |
| ----------------- | --------------- |
| calculate_metrics | instance method |
| get_specificity   | instance method |
| get_sensitivity   | instance method |
| get_npv           | instance method |
| get_ppv           | instance method |
| display_outcomes  | instance method |
| display_metrics   | instance method |
  • Add documentation for MockData Class
  • Add documentation for TestMetrics Class

Link to CerebStats Documentation
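The four outcome counts above feed the standard confusion-matrix formulas: sensitivity = TP/(TP+FN), specificity = TN/(TN+FP), PPV = TP/(TP+FP), and NPV = TN/(TN+FN). A minimal sketch of a TestMetrics-style class — hypothetical names and signatures, not the actual CerebStats code:

```python
class TestMetrics:
    """Sketch of confusion-matrix metrics for validation-test outcomes."""

    def __init__(self, predictions, truths):
        self.predictions = predictions  # booleans: test declared "valid"
        self.truths = truths            # booleans: model actually valid
        self.tp = self.fp = self.tn = self.fn = 0

    def calculate_metrics(self):
        """Tally TP, FP, TN, FN from paired prediction/truth outcomes."""
        for pred, truth in zip(self.predictions, self.truths):
            if pred and truth:
                self.tp += 1
            elif pred and not truth:
                self.fp += 1
            elif not pred and not truth:
                self.tn += 1
            else:
                self.fn += 1

    def get_sensitivity(self):
        """True-positive rate: TP / (TP + FN)."""
        return self.tp / (self.tp + self.fn)

    def get_specificity(self):
        """True-negative rate: TN / (TN + FP)."""
        return self.tn / (self.tn + self.fp)

    def get_ppv(self):
        """Positive predictive value: TP / (TP + FP)."""
        return self.tp / (self.tp + self.fp)

    def get_npv(self):
        """Negative predictive value: TN / (TN + FN)."""
        return self.tn / (self.tn + self.fn)

    def display_metrics(self):
        """Print the four derived metrics."""
        print(f"sensitivity={self.get_sensitivity():.3f} "
              f"specificity={self.get_specificity():.3f} "
              f"ppv={self.get_ppv():.3f} npv={self.get_npv():.3f}")
```

For example, predictions `[True, True, False, False, True, False]` against truths `[True, False, False, True, True, False]` give TP=2, FP=1, TN=2, FN=1.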

lungsi added a commit that referenced this pull request on Aug 20, 2021.
@lungsi merged commit 7d181fc into cerebunit:master on Aug 20, 2021.