Evaluation Pipeline for Models #271
As discussed in #131, it would be helpful to have a consistent pipeline to evaluate prediction models. This way we get to know how well the currently implemented models perform, which ones need to be improved, and how well a new model performs. The pipeline should calculate the appropriate metrics that have been specified in #221, while some of them are already available here.
Please refer to the PR template for further explanations of the metrics.
For both the identification and classification (false positive reduction) tasks, the LUNA16 authors proposed a handy evaluation framework.
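As a starting point for discussion, here is a minimal sketch of what the metric-computing core of such a pipeline could look like for the classification task. The function name `evaluate` and the exact metric list are assumptions for illustration; the actual metrics should follow what was specified in #221 and the LUNA16 framework.

```python
from typing import Dict, Sequence


def evaluate(labels: Sequence[int], preds: Sequence[int]) -> Dict[str, float]:
    """Compare binary predictions against ground-truth labels and
    return basic classification metrics (hypothetical interface)."""
    # Count the four confusion-matrix cells.
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    return {
        # Guard against division by zero when a class is absent.
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,
        "specificity": tn / (tn + fp) if tn + fp else 0.0,
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "accuracy": (tp + tn) / len(labels) if labels else 0.0,
    }
```

A pipeline built around this could run every registered model against a held-out set and report the resulting dictionary per model, making regressions between model versions easy to spot.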