Reference implementation of domain-adversarial training on a RetinaNet (to use as template in the MIDOG challenge)


Docker image of reference algorithm for MIDOG 2022 challenge.

Credits: F. Wilm, K. Breininger, M. Aubreville

This docker image contains a reference implementation of domain-adversarial training based on RetinaNet, provided by Frauke Wilm (Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany) for the MIDOG challenge.

The container shall serve as an example of what we (and the grand-challenge platform) expect the outputs to look like. At the same time, it serves as a template for you to implement your own algorithm for submission to MIDOG 2022.

Please note that the MIDOG 2022 docker reference container has changed from the MIDOG 2021 reference container. Main differences are:

  • Changed output format (see 2), enabling calculation of mAP as additional metric.
  • Updated paths in process.py and test.sh/test.bat to comply with grand-challenge.org's new interface for MIDOG 2022.

You will have to provide all files that are needed to run your model in a docker container; this example may be of help. We also provide a quick explanation of how the container works here.

For reference, you may also want to read the blog post of grand-challenge.org on how to create an algorithm.

Content:

  1. Prerequisites
  2. An overview of the structure of this example
  3. Embedding your algorithm into an algorithm docker container
  4. Building your container
  5. Testing your container
  6. Generating the bundle for uploading your algorithm
  7. Creating an "Algorithm" on GrandChallenge and submitting your solution to the MIDOG Challenge

1. Prerequisites

The container is based on docker, so you need to install docker first.

Second, you need to clone this repository:

git clone https://github.com/DeepPathology/MIDOG_reference_docker

You will also need evalutils (provided by grand-challenge):

pip install evalutils

Optional: If you want GPU support for local testing, you should install the NVIDIA container toolkit.

As stated by the grand-challenge team:

Windows tip: It is highly recommended to install Windows Subsystem for Linux (WSL) to work with Docker on a Linux environment within Windows. Please make sure to install WSL 2 by following the instructions on the same page. In this tutorial, we have used WSL 2 with Ubuntu 18.04 LTS. Also, note that the basic version of WSL 2 does not come with GPU support. Please watch the official tutorial by Microsoft on installing WSL 2 with GPU support. The alternative is to work purely out of Ubuntu, or any other flavor of Linux.

2. An overview of the structure of this example

This example is a RetinaNet implementation, extended by a domain-adversarial branch.

  • The main processing (inference) is done in the file detection.py. It provides the class MyMitosisDetection, which loads the model and provides the method process_image() that takes an individual test image as a numpy array and returns the detections on that image.
  • The main file that is executed by the container is process.py. It imports and instantiates the model (MyMitosisDetection). It then loads all images that are part of the test set and processes each of them (using the process_image() method). As post-processing, it also performs a final non-maxima suppression on the image before creating the return dictionary, which contains all individual detected points; these are ultimately stored in the file /output/mitotic-figures.json.

The output file is a dictionary (each input file is processed independently), and has the following format:

{
    "type": "Multiple points",
    "points": [
        {
            "point": [
                0.14647372756903898,
                0.1580733550628604,
                0
            ],
            "probability": 0.534,
            "name": "mitotic figure"
        },
        {
            "point": [
                0.11008273935312868,
                0.03707331924495862,
                0
            ],
            "probability": 0.302839283,
            "name": "non-mitotic figure"
        }
    ],
    "version": {
        "major": 1,
        "minor": 0
    }
}

Note that each point is described by a dictionary with the fields "point", "probability", and "name", as shown in the example above.

The field "name" is used to distinguish between above-threshold and below-threshold detections. Please make sure that you find a suitable detection threshold. The below-threshold detections are part of the output in order to calculate the average precision metric.
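As a minimal sketch of how one such entry could be assembled — the field names and class labels come from the format above, while the helper name and the 0.5 default threshold are placeholders you would tune on your own validation data:

```python
def to_point_entry(x, y, probability, threshold=0.5):
    """Build one entry of the "points" list in the output format above.

    The 0.5 threshold is a placeholder - choose a suitable detection
    threshold on your own validation data.
    """
    name = "mitotic figure" if probability >= threshold else "non-mitotic figure"
    return {"point": [x, y, 0], "probability": probability, "name": name}
```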

Caution: This has changed from the MIDOG 2021 docker container and also from earlier versions of this container. If you provide the old format, the evaluation will still work, but will not give you sensible values for the AP metric.
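To sanity-check a generated output file locally, a small validation sketch may help; the checks cover only the fields shown in the example above, and `validate_output` is a hypothetical helper, not part of the repository:

```python
import json

REQUIRED_POINT_KEYS = {"point", "probability", "name"}

def validate_output(data):
    """Check a loaded output dictionary against the format shown above."""
    assert data.get("type") == "Multiple points"
    assert "version" in data
    for entry in data.get("points", []):
        assert REQUIRED_POINT_KEYS <= entry.keys()
        assert len(entry["point"]) == 3            # x, y, z
        assert 0.0 <= entry["probability"] <= 1.0
        assert entry["name"] in ("mitotic figure", "non-mitotic figure")
    return True

# Usage on a real run (output path from the source):
# with open("/output/mitotic-figures.json") as f:
#     validate_output(json.load(f))
```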

3. Embedding your algorithm into an algorithm docker container

We encourage you to adapt this example to your needs and insert your mitosis detection solution. You can adapt the code, remove and add code files as needed, and adapt parameters, thresholds and other aspects. As discussed above, the main file that is executed by the container is process.py. Here, we have marked the most relevant code lines with TODO.

To test this container locally without a docker container, you may set the execute_in_docker flag to false - this sets all paths to relative paths. Don't forget to set it back to true when you want to switch back to the docker container setting.
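Conceptually, the flag switches between container-absolute and relative paths. A hypothetical sketch of that switch — the output filename is from the source, the local paths and the helper name are placeholders:

```python
def resolve_paths(execute_in_docker):
    """Return input/output locations depending on where the code runs."""
    if execute_in_docker:
        # Absolute paths mounted inside the container (output file from the source)
        return {"input": "/input/", "output": "/output/mitotic-figures.json"}
    # Relative paths for local testing outside docker (placeholders)
    return {"input": "./test/", "output": "./output/mitotic-figures.json"}
```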

If you need a different base image to build your container (e.g., TensorFlow instead of PyTorch, or a different version) or additional libraries, and to make sure that all source files (and weights) are copied into the docker container, you will have to adapt the Dockerfile and the requirements.txt file accordingly.


4. Building your container

To test if all dependencies are met, you should run the file build.bat (Windows) / build.sh (Linux) to build the docker container. Please note that the next step (testing the container) also runs a build, so this step is not mandatory if you are certain that everything is set up correctly.

5. Testing your container

To test your container, you should run test.bat (on Windows) or test.sh (on Linux, might require sudo privileges). This will run the test image(s) provided in the test folder through your model. It will check them against what you provide in test/expected_output.json. Be aware that this will, of course, initially not be equal to the demo detections we put there for testing our reference model.

6. Generating the bundle for uploading your algorithm

Finally, you need to run the export.sh (Linux) or export.bat script to package your docker image. This step creates a file with the extension "tar.gz", which you can then upload to grand-challenge to submit your algorithm.

7. Creating an "Algorithm" on GrandChallenge and submitting your solution to the MIDOG Challenge

**Note: Submission to grand-challenge.org will open on August 5th.**

In order to submit your docker container, you first have to add an Algorithm entry for your docker container here: https://midog2022.grand-challenge.org/evaluation/challenge/algorithms/create/

Please enter a name for the algorithm:

(screenshot: naming the algorithm)

After saving, you can add your docker container (you can also overwrite your container here):

(screenshot: uploading the container)

Please note that it can take a while (several minutes) until the container becomes active. You can determine which one is active in the same dialog:

(screenshot: active-container indicator)

You can also try out your algorithm. Please note that you will require an image that has the DPI property set in order to use this function. You can use the image test/007.tiff provided as part of this container as test image (it contains mitotic figures).

(screenshot: trying out the algorithm)

Finally, you can submit your docker container to MIDOG:

(screenshot: submitting the container)

General remarks

  • The training is not done as part of the docker container, so please make sure that you only run inference within the container.
  • The model in this image was trained on MIDOG 2021 data, which contained only human breast cancer scanned with various scanners. Do not expect it to have superb performance on the test sets.
  • The official manuscript has a typo in Table 1. On the XR scanner, the reference approach scored an F1-score of 0.7826 and thereby outperformed the strong baseline on this scanner. The value was corrected in the manuscript version on arXiv.
