
🧠 DeepIGeoS

A PyTorch implementation of the DeepIGeoS paper.

📄 DeepIGeoS: A Deep Interactive Geodesic Framework for Medical Image Segmentation (2018)

Additional page: Notion page (in Korean 🇰🇷)

👨🏻‍💻 Contributors

이영석, 이주호, 이준호, 정경중, 손소연, 조현우

🔍 Prerequisites

Please check the environment and requirements below before you start. If needed, upgrade or install the listed packages so everything runs smoothly.


☺︎ Environments

Ubuntu 16.04
Python 3.7.11

☺︎ Requirements

dotmap
GeodisTK
opencv-python
tensorboard
torch
torchio
torchvision
tqdm
PyQt5

🎞️ Datasets

Download the BraTS 2021 dataset from the BraTS 2021 website using load_datasets.sh:

$ bash load_datasets.sh

💻 Train

☺︎ P-Net

$ python train_pnet.py -c configs/config_pnet.json

☺︎ R-Net

$ python train_rnet.py -c configs/config_rnet.json

☺︎ Tensorboard

$ tensorboard --logdir experiments/logs/

💻 Run

☺︎ Simple QT Application

To operate DeepIGeoS with simple mouse-click interactions, we created a Qt-based application. You can run DeepIGeoS with the main script, main_deepigeos.py, as shown below.

$ python main_deepigeos.py

🧬 Results

☺︎ with the Simulated Interactions

The results with simulated user interactions follow the rules below. Simulations were generated on the three slices with the largest mis-segmentation along each axis (sagittal, coronal, and axial).

  1. To find mis-segmented regions, the automatic segmentations by P-Net are compared with the ground truth.
  2. The user interactions on each mis-segmented region are then simulated by n randomly sampled pixels in that region. (If the size of one connected under-segmented or over-segmented region is Nm, we set n for that region to 0 if Nm < 30 and [Nm/100] otherwise.) A minimal sketch of this sampling rule follows the legend below.
(Figure: refinement results in the sagittal, coronal, and axial views)
  • Yellow border : Ground Truth Mask
  • Green area : P-Net Prediction Mask
  • Red area : R-Net Refinement Mask
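
Below is a minimal sketch of the sampling rule above, assuming NumPy and SciPy. The function name simulate_interactions and the rounding convention for [Nm/100] are illustrative assumptions, not the repository's actual code.

import numpy as np
from scipy import ndimage

def simulate_interactions(pred, gt, rng=None):
    """Sample simulated clicks on each connected mis-segmented region.

    pred, gt : binary arrays of the same shape (P-Net prediction, ground truth).
    Returns a binary array marking the sampled interaction pixels.
    """
    rng = np.random.default_rng() if rng is None else rng
    pred, gt = pred.astype(bool), gt.astype(bool)
    clicks = np.zeros(gt.shape, dtype=np.uint8)
    # Under-segmentation: ground-truth foreground missed by the prediction.
    # Over-segmentation: predicted foreground absent from the ground truth.
    for mis_seg in (gt & ~pred, pred & ~gt):
        labeled, num_regions = ndimage.label(mis_seg)
        for region_id in range(1, num_regions + 1):
            coords = np.argwhere(labeled == region_id)
            n_m = len(coords)
            # Rule above: no clicks for tiny regions, otherwise roughly Nm/100
            # pixels (rounding up here is an assumption).
            n = 0 if n_m < 30 else int(np.ceil(n_m / 100))
            for idx in rng.choice(n_m, size=n, replace=False):
                clicks[tuple(coords[idx])] = 1
    return clicks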

☺︎ with the User Interaction

Results with user interactions are shown below:

(Demo video: app_demo.mp4)
  1. Click the LOAD IMG button to load an image
  2. Click the P-NET button to run the automatic segmentation
  3. Check the automatic segmentation result
  4. Click where the result should be refined
    • Circle : under-segmented region
    • Square : over-segmented region
  5. Click the R-NET button to run the refinement
  6. Check the refined segmentation result
(Figure: pnet_mask, rnet_mask, and gt_mask volume renderings)
  • Green Volume : P-Net Prediction Mask
  • Orange Volume : R-Net Refinement Mask
  • Blue Volume : Ground Truth Mask

📄 Background of DeepIGeoS

For the implementation of the DeepIGeoS paper, all the steps we studied and carried out are described in the following subsections.

☺︎ Abstract

  • Consider a deep CNN-based interactive framework for 2D and 3D medical image segmentation
  • Present a new way to combine user interactions with CNNs based on geodesic distance maps
  • Propose a resolution-preserving CNN structure which leads to a more detailed segmentation result compared with traditional CNNs that suffer from resolution loss
  • Extend the current RNN-based CRFs for segmentation so that the back-propagatable CRFs can use user interactions as hard constraints and all the parameters of the potential functions can be trained in an end-to-end way

☺︎ Architecture

Two-stage pipeline: P-Net (automatically obtains an initial segmentation) + R-Net (refines the initial segmentation with a small number of user interactions that are encoded as geodesic distance maps). A conceptual sketch of this flow follows the list below.

  • P-Net : uses a CNN to obtain an initial automatic segmentation
  • R-Net : refines the segmentation by taking as input the original image, the initial segmentation, and geodesic distance maps based on foreground/background user interactions
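
As a rough illustration of this two-stage flow, here is a minimal PyTorch sketch. The pnet and rnet modules are passed in as placeholders and geodesic_maps stands in for the geodesic-distance encoding described below, so none of these names are the repository's actual API.

import torch

def segment_with_refinement(image, pnet, rnet, fg_clicks, bg_clicks, geodesic_maps):
    """image: (B, 1, ...) tensor; fg_clicks/bg_clicks: binary click masks of the same spatial shape."""
    with torch.no_grad():
        # Stage 1: P-Net produces the automatic initial segmentation.
        init_seg = pnet(image).argmax(dim=1, keepdim=True).float()

        # User clicks are encoded as foreground/background geodesic distance maps.
        fg_dist = geodesic_maps(image, fg_clicks)
        bg_dist = geodesic_maps(image, bg_clicks)

        # Stage 2: R-Net refines, taking the image, the initial segmentation,
        # and the two distance maps as extra input channels.
        rnet_input = torch.cat([image, init_seg, fg_dist, bg_dist], dim=1)
        refined_seg = rnet(rnet_input).argmax(dim=1, keepdim=True)
    return init_seg, refined_seg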

☺︎ CRF-Net

In the paper, CRF-Net(f) is connected to P-Net and CRF-Net(fu) is connected to R-Net.

  • CRF-Net(f) : extends the RNN-based CRF so that the pairwise potentials can be freeform functions
  • CRF-Net(fu) : integrates user interactions into CRF-Net(f) in the interactive refinement context
  • Note: the CRF-Nets are not implemented here, for simplicity

☺︎ Geodesic Distance Maps

The user interactions with the same label (foreground or background) are converted into a distance map. A minimal usage sketch follows this list.

  • The Euclidean distance treats each direction equally and does not take the image context into account.
  • In contrast, the geodesic distance helps to better differentiate neighboring pixels with different appearances and improves label consistency in homogeneous regions.
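
Below is a minimal sketch of computing such a distance map for one 2D slice with GeodisTK (listed in the requirements). The wrapper name geodesic_distance_2d, the fallback for the no-click case, and the parameter values are illustrative assumptions.

import numpy as np
import GeodisTK

def geodesic_distance_2d(image, clicks, lamb=1.0, iterations=2):
    """image: (H, W) slice; clicks: (H, W) binary mask of same-label user clicks.

    lamb = 1.0 uses image gradients only (pure geodesic distance), while
    lamb = 0.0 reduces to the Euclidean distance, so the same call can
    produce either kind of map.
    """
    image = np.asarray(image, dtype=np.float32)
    seeds = np.asarray(clicks, dtype=np.uint8)
    if seeds.sum() == 0:
        # No interaction for this label: fall back to a constant (zero) map.
        return np.zeros_like(image)
    return GeodisTK.geodesic2d_raster_scan(image, seeds, lamb, iterations)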

☺︎ BraTS Dataset

We only consider the T2-FLAIR images in BraTS 2021 and segment the whole tumor.

  • The BraTS dataset is a retrospective collection of brain tumor mpMRI scans acquired from multiple institutions under standard clinical conditions, but with different equipment and imaging protocols, resulting in vastly heterogeneous image quality that reflects diverse clinical practice across institutions. Inclusion criteria comprised a pathologically confirmed diagnosis and available MGMT promoter methylation status. These data have been updated since BraTS 2020, increasing the total number of cases from 660 to 2,000.
