Joint one-sided synthetic unpaired image translation and segmentation for colorectal cancer prevention

Abstract

Deep learning has shown excellent performance in analysing medical images. However, datasets are difficult to obtain due to privacy issues, standardization problems, and a lack of annotations. We address these problems by producing realistic synthetic images using a combination of 3D technologies and generative adversarial networks. We propose CUT-seg, a joint training setup in which a segmentation model and a generative model are trained together to produce realistic images while learning to segment polyps. We take advantage of recent one-sided translation models because they use significantly less memory, allowing us to add a segmentation model to the training loop. CUT-seg performs better, is computationally less expensive, and requires fewer real images than other memory-intensive image translation approaches that need two-stage training. Promising results are achieved on five real polyp segmentation datasets using only one real image and zero real annotations. As part of this study we release Synth-Colon, an entirely synthetic dataset that includes 20,000 realistic colon images and additional details about depth and 3D geometry: https://enric1994.github.io/synth-colon.
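
In code terms, the joint training described above boils down to combining the one-sided translation (CUT) loss with a weighted segmentation loss in a single optimization step. The following PyTorch-style sketch is illustrative only, not the repository's implementation; joint_step, G (generator), S (segmenter), and cut_loss_fn are hypothetical names, and the binary cross-entropy segmentation loss is an assumption.

import torch.nn.functional as F

S_LOSS_WEIGHT = 0.001  # mirrors the --S_loss_weight value used in Training below

def joint_step(G, S, cut_loss_fn, synth_img, synth_mask, optimizer):
    # Translate the synthetic colon image toward the real-image domain.
    fake_real = G(synth_img)
    # Segment polyps on the translated image; the synthetic mask is free supervision.
    seg_loss = F.binary_cross_entropy_with_logits(S(fake_real), synth_mask)
    # One-sided translation loss plus the weighted segmentation loss.
    loss = cut_loss_fn(synth_img, fake_real) + S_LOSS_WEIGHT * seg_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()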

Setup

  • Docker, Docker Compose, and NVIDIA-Docker are required to run our code.

  • Pre-trained weights for HarDNet68 (hardnet68.pth) can be found here. Place the file in the root folder.

  • Required datasets can be found here: train (402 MB), test (327 MB), Synth-Colon (41 GB). Edit docker/docker-compose.yml with the location of your data, which should have the following structure: polyp-data/{TrainDataset, TestDataset} (see the volume sketch after this list).

  • For consistency, rename all image and mask folders inside TrainDataset and TestDataset to images and masks (a renaming sketch also follows this list).
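
A minimal sketch of the relevant part of docker/docker-compose.yml, assuming the service is named cut (matching the container used in Training below); the /polyp-data target path inside the container is a placeholder, so keep whatever target the file already defines:

services:
  cut:
    volumes:
      # Left side is a placeholder: point it at your local polyp-data folder.
      - /path/to/polyp-data:/polyp-data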
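
If you prefer to script the renaming, here is a minimal Python sketch (a hypothetical helper, not part of the repository). It assumes per-dataset subfolders whose image and mask folders are named image and mask; the original folder names vary per dataset, so adjust the pairs below accordingly:

import os

DATA_ROOT = "/path/to/polyp-data"  # placeholder; match your docker-compose mount

for split in ("TrainDataset", "TestDataset"):
    split_dir = os.path.join(DATA_ROOT, split)
    for dataset in os.listdir(split_dir):
        base = os.path.join(split_dir, dataset)
        if not os.path.isdir(base):
            continue
        for old, new in (("image", "images"), ("mask", "masks")):
            src = os.path.join(base, old)
            dst = os.path.join(base, new)
            if os.path.isdir(src) and not os.path.exists(dst):
                os.rename(src, dst)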

Training

  1. Run docker-compose up -d in the docker/ folder. It will start a container with all the dependencies installed.

  2. Open a shell in the container: docker exec -it cut bash

  3. Prepare datasets: python /cut/data/prepare_datasets_finetuning.py. This step will take around 10 minutes.

  4. Train on the Kvasir dataset:

python train.py \
--dataroot /cut/datasets/finetune_Kvasir \
--name experiment_name \
--lr 0.00001 \
--batch_size 4 \
--n_epochs 50 \
--S_loss_weight 0.001
  5. Test your best checkpoint on all datasets: python test.py experiment_name
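
The --S_loss_weight flag presumably weights the segmentation loss against the translation losses, matching the joint objective sketched after the Abstract; the remaining flags are standard learning-rate, batch-size, and epoch settings.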

Logging

Images are saved in the checkpoints folder. To visualize the images and losses in Weights & Biases, use the argument --wandb online (see the example below).
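
For example, reusing the training command from above (experiment_name is a placeholder):

python train.py --dataroot /cut/datasets/finetune_Kvasir --name experiment_name --wandb online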
