
UNet with weighted loss and morphological postprocessing


Overview

The fourth solution introduces three ideas into the computation pipeline: a weighted loss, morphological post-processing, and a fourth U-Net output, contour_touching.

Weighted Loss

With a weighted loss, we assign a weight to the loss value computed on each of the model's outputs. The implementation is straightforward (models.py:L38):

loss_function = [('mask',             segmentation_loss, 0.3),
                 ('contour',          segmentation_loss, 0.5),
                 ('contour_touching', segmentation_loss, 0.1),
                 ('center',           segmentation_loss, 0.1)]

where:

  • the weights sum to 1.0
  • segmentation_loss is the loss function, called as segmentation_loss(output, target)

We consider contours to be the most important output for model performance, so we assign them the highest weight. Conversely, we did not observe the center playing a crucial role, so we decreased its weight.
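The snippet below is a minimal sketch of how these weighted terms could be combined into a single training loss in PyTorch. It assumes, hypothetically, that the model returns its outputs as a dictionary keyed by the same names, and uses nn.BCEWithLogitsLoss as a stand-in for the actual segmentation_loss; it is not the repository's implementation.

```python
import torch.nn as nn

# Stand-in for the repository's segmentation_loss (assumption).
segmentation_loss = nn.BCEWithLogitsLoss()

loss_function = [('mask',             segmentation_loss, 0.3),
                 ('contour',          segmentation_loss, 0.5),
                 ('contour_touching', segmentation_loss, 0.1),
                 ('center',           segmentation_loss, 0.1)]


def combined_loss(outputs, targets):
    """Weighted sum of per-output losses: sum_i w_i * loss_i(output_i, target_i).

    `outputs` and `targets` are assumed to be dicts keyed by output name.
    """
    total = 0.0
    for name, loss_fn, weight in loss_function:
        total = total + weight * loss_fn(outputs[name], targets[name])
    return total
```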

Morphological Post-Processing

We created a procedure that runs on the predictions. It uses several morphological transformations, such as erosion, dilation, and watershed, to improve the final masks. The implementation is available here: postprocessing.py:L127.
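As an illustration only, the sketch below shows how the named transformations can be chained with scikit-image and SciPy on a single predicted probability map. The thresholds and structuring-element sizes are arbitrary assumptions, not the values used in postprocessing.py.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import measure, morphology, segmentation


def postprocess(probability_map, threshold=0.5):
    """Illustrative post-processing: threshold, clean up with morphology,
    then split touching nuclei with a marker-based watershed."""
    # Binarize the network prediction.
    mask = probability_map > threshold

    # Remove small speckles and fill small holes.
    mask = morphology.binary_opening(mask, morphology.disk(2))
    mask = morphology.remove_small_holes(mask, area_threshold=64)

    # Erode to obtain one marker per nucleus, then label the markers.
    markers = measure.label(morphology.binary_erosion(mask, morphology.disk(4)))

    # Watershed on the negative distance transform splits touching instances.
    distance = ndi.distance_transform_edt(mask)
    labels = segmentation.watershed(-distance, markers, mask=mask)
    return labels
```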

Auxiliary target: Touching Contours

Here, we create masks with non-zero values where nuclei overlap. This serves as an additional auxiliary target for the model to learn. The implementation is here: preparation.py:L60.
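A hedged sketch of one common way to build such a target is shown below: each per-nucleus binary mask is slightly dilated so that adjacent nuclei register as touching, and pixels covered by two or more dilated nuclei are marked as non-zero. The function name and dilation width are illustrative assumptions, not the code from preparation.py.

```python
import numpy as np
from skimage import morphology


def touching_contours(instance_masks, width=2):
    """Illustrative construction of a `contour_touching` target from a list of
    per-nucleus binary masks (all of the same shape)."""
    footprint = morphology.disk(width)
    # Count, per pixel, how many (dilated) nuclei cover it.
    coverage = np.zeros(instance_masks[0].shape, dtype=np.int32)
    for instance in instance_masks:
        coverage += morphology.binary_dilation(instance > 0, footprint).astype(np.int32)
    # Pixels covered by at least two nuclei form the touching regions.
    return (coverage >= 2).astype(np.uint8)
```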

Run default experiment

Run command:

$ neptune login
$ neptune send main.py --worker gcp-gpu-large --environment pytorch-0.2.0-gpu-py3 -- train_evaluate_predict_pipeline --pipeline_name unet_multitask

When training is completed, collect the Kaggle submission file from /output/dsb/experiments/submission.csv.