A framework to objectively evaluate the performance of machine learning algorithms in biomedical imaging

grand-challenge.org


Fair and objective comparisons of machine learning algorithms improve the quality of research outputs in both academia and industry. This repo contains the source code behind grand-challenge.org, which serves as a resource for users to compare algorithms in biomedical image analysis. This instance is maintained by developers at Radboud University Medical Center in Nijmegen, The Netherlands, and Fraunhofer MeVis in Bremen, Germany, but you can also create your own instance.

This Django-powered website has been developed by the Consortium for Open Medical Image Computing. It features:

  • Creation and management of challenges
  • Easy creation of challenge sites with WYSIWYG editing
  • Fine-grained permissions for challenge administrators and participants
  • Management and serving of datasets
  • Automated evaluation of predictions
  • Live leaderboards
  • User profiles and social authentication
  • Teams
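
To give a flavour of what the automated evaluation does, here is a minimal, hypothetical sketch of the kind of metric an evaluation might compute for a leaderboard: the Dice overlap between a predicted and a ground-truth binary segmentation mask. The function name and masks below are illustrative assumptions, not part of the actual grand-challenge.org evaluation API.

```python
def dice_score(prediction, ground_truth):
    """Dice coefficient between two equal-length binary (0/1) masks.

    Hypothetical example of a segmentation metric a challenge's
    automated evaluation could compute per submission.
    """
    if len(prediction) != len(ground_truth):
        raise ValueError("masks must have the same length")
    # Voxels where both masks agree on the foreground class
    intersection = sum(p * g for p, g in zip(prediction, ground_truth))
    total = sum(prediction) + sum(ground_truth)
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total


# Illustrative usage: a leaderboard could rank submissions by mean Dice.
print(dice_score([1, 1, 0, 0], [1, 0, 0, 0]))
```

A real challenge would typically compute such a metric over every case in the hidden test set and aggregate the results before publishing them to the live leaderboard.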

If you would like to start your own website or contribute to the development of the framework, please see the docs.

Slack

You can join the development Slack using this link.