Code for MICCAI 2017 paper on binary sparse convolutions for semantic segmentation of medical images
BRIEFnet

Code for MICCAI 2017 paper "BRIEFnet: Deep Pancreas Segmentation using Sparse Dilated Convolutions"

by Mattias P. Heinrich and Ozan Oktay.

Please see http://mpheinrich.de for the PDF and further details.

Prerequisites to run example

1) Create a free synapse.org account and download the pancreas dataset

"Beyond the Cranial Vault" MICCAI workshop (https://www.synapse.org/#!Synapse:syn3193805/wiki/89480)

Log in, click on 'Files' and select 'Abdomen' (or go directly to https://www.synapse.org/#!Synapse:syn3376386). You only need to download RawData.zip (1.53 GB, containing 30 scans) and then extract the files. Please note that the files are not consecutively named (#11-#20 are missing); this will be fixed in step 2.

2) Open a MATLAB instance and preprocess the data

If you have not used NIfTI files in MATLAB before, install the toolbox by Jimmy Shen: http://de.mathworks.com/matlabcentral/fileexchange/8797-tools-for-nifti-and-analyze-image

Once you have obtained the training data, crop it using the provided bounding boxes (boundingbox_abdomen15.mat). Run the script as follows:

bbox=load('boundingbox_abdomen15.mat');
crop_scans(bbox,in_folder,out_folder); 

providing an input folder (i.e. the one you extracted the training folder of RawData.zip to) and an output folder (here 'pancreas'). This will generate 30 scans and corresponding binary segmentations of size 124x84x94. This may take more than a minute.
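As a quick sanity check, you can verify that all cropped volumes have the expected size. This is only a sketch: the file-naming pattern imgN_res.nii.gz is assumed from the example in step 4, and load_untouch_nii comes from the NIfTI toolbox installed above.

```matlab
% Hedged sanity check: confirm every cropped scan has size 124x84x94.
% Assumes the output naming imgN_res.nii.gz used later in this README.
for n = 1:30
    nii = load_untouch_nii(sprintf('pancreas/img%d_res.nii.gz', n));
    assert(isequal(size(nii.img), [124 84 94]), ...
        'scan %d has unexpected size', n);
end
disp('all 30 cropped scans have the expected size');
```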

3) Install and compile MatConvNet

https://github.com/vlfeat/matconvnet

Follow the guide at http://www.vlfeat.org/matconvnet/install/ to compile the toolbox. For applying the trained models the CPU is sufficient; for training, a GPU is recommended. Make sure MatConvNet is working and its paths are set within MATLAB. Then add the following two custom BRIEFnet layer files to the folder matlab/+dagnn/:

Mult.m and BRIEF.m
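A minimal sketch of this setup step; the path 'matconvnet' is a placeholder for wherever you cloned the toolbox:

```matlab
% Set up MatConvNet and install the custom BRIEFnet layers.
% Replace 'matconvnet' with the path of your own MatConvNet checkout.
run(fullfile('matconvnet', 'matlab', 'vl_setupnn.m'));
copyfile('Mult.m',  fullfile('matconvnet', 'matlab', '+dagnn'));
copyfile('BRIEF.m', fullfile('matconvnet', 'matlab', '+dagnn'));
```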

Finally, you need to extract the Eigen library files, which are used for the edge-preserving smoothing:

unix('tar zxf eigen.tar.gz');
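The repository also contains postProcessRegularise.cpp, which implements this smoothing. If it needs to be compiled on your machine, a mex call along these lines should work; the include path is an assumption based on where the archive extracts to:

```matlab
% Hedged sketch: compile the edge-preserving post-processing with mex,
% pointing the compiler at the extracted Eigen headers.
mex -I./eigen postProcessRegularise.cpp
```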

4) Load a trained BRIEFnet model and apply it to a scan (#7)

We have split the cross-validation into 6 folds of 25 training images each: fold 1 trains on #6-#30, fold 2 on #1-#5 and #11-#30, and so on. To test scan #7 (held out in fold 2), we use the following commands:

S=load('briefnet_fold2.mat');
netDAG=dagnn.DagNN.loadobj(S.model);
[imCoarse,imLocal,img1]=prepare_data_individual_scan('pancreas/img7_res.nii.gz');
[probabilities,segmentation]=apply_model(netDAG,imCoarse,imLocal,img1);
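The fold layout described above can be sketched as follows (a hypothetical helper for picking train/test indices, not part of the repository):

```matlab
% For fold f, scans #(5*(f-1)+1) .. #(5*f) are held out for testing
% and the remaining 25 scans are used for training.
f = 2;                               % fold index, 1..6
test_idx  = 5*(f-1)+1 : 5*f;         % fold 2 -> scans #6..#10
train_idx = setdiff(1:30, test_idx); % the other 25 scans
```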

We can evaluate the quality of the segmentation obtained with BRIEFnet by calculating the Dice overlap with the ground truth (which should be about 60% for this scan) and visualising an overlay:

segmentTestGT=load_untouch_nii('pancreas/seg7_res.nii.gz');
segmentTestGT=segmentTestGT.img;
dice1(segmentation,segmentTestGT)
fused=overlayparula(probabilities(:,:,46),img1(:,:,46));
figure; imshow(flip(permute(fused,[2,1,3]),1));
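For reference, the Dice overlap computed by dice1.m is presumably the standard formula D = 2|A∩B| / (|A| + |B|); a minimal stand-alone version (a sketch, the exact behaviour of the provided dice1.m may differ):

```matlab
% Standard Dice overlap between two binary volumes A and B.
dice = @(A,B) 2*nnz(A & B) / (nnz(A) + nnz(B));
```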

TODO: how to train a model with your own data.

If you find any problems or need help, feel free to contact me at lastname @ uni-luebeck.de

Mattias Heinrich