OBELISK: one binary extremely large and inflecting sparse kernel

(PyTorch v1.0 implementation)

This repository contains code for the Medical Image Analysis (MIDL Special Issue) paper: OBELISK-Net: Fewer Layers to Solve 3D Multi-Organ Segmentation with Sparse Deformable Convolutions by Mattias P. Heinrich, Ozan Oktay, Nassim Bouteldja (winner of the MIDL 2018 best paper award)

The main idea of OBELISK is to learn a large, spatially deformable filter kernel for (3D) image analysis. It replaces a conventional (say 5x5) convolution with

  1. trainable spatial filter offsets (xy(z)-coordinates) and
  2. a linear 1x1 convolution that contains the filter coefficients (values).

During training, OBELISK adapts its receptive field to the given problem in a completely data-driven manner and thus automatically solves many tuning steps that are usually done by 'network engineering'. The OBELISK layers have substantially fewer trainable parameters than the conventional CNNs used in 3D U-Nets and often perform better for medical segmentation tasks (see table below).
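The two-step layer described above can be sketched as follows. This is a minimal 2D illustration of the principle only, not the authors' 3D implementation; the class name, offset initialisation, and number of offsets are all made up here:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ObeliskSketch(nn.Module):
    """Illustrative OBELISK-style layer (2D for brevity).

    Instead of a dense KxK kernel it learns K spatial offsets at which the
    input is sampled off-grid (via grid_sample), followed by a 1x1
    convolution that holds the filter coefficients.
    """
    def __init__(self, in_ch, out_ch, num_offsets=8):
        super().__init__()
        # step 1: trainable xy offsets in normalised [-1, 1] coordinates
        self.offsets = nn.Parameter(0.05 * torch.randn(num_offsets, 2))
        # step 2: linear 1x1 convolution holding the filter values
        self.linear = nn.Conv2d(in_ch * num_offsets, out_ch, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        # identity sampling grid at the output resolution
        theta = torch.eye(2, 3, device=x.device).unsqueeze(0).repeat(b, 1, 1)
        base = F.affine_grid(theta, (b, c, h, w), align_corners=False)
        # sample the input at each learned offset location
        samples = [
            F.grid_sample(x, base + self.offsets[k].view(1, 1, 1, 2),
                          align_corners=False)
            for k in range(self.offsets.shape[0])
        ]
        # mix the sampled features with the 1x1 convolution
        return self.linear(torch.cat(samples, dim=1))
```

Because the offsets are ordinary parameters, gradients flow through grid_sample's bilinear interpolation and the receptive field is learned end-to-end.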

The working principle (and the basis of its implementation) is visualised below. The idea is to replace the im2col operator, which is heavily used for matrix-multiplication-based convolution in many DL frameworks, with a continuous off-grid grid_sample operator (available for 3D since pytorch v0.4). Please also read up on im2col if you're not familiar with it.
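For intuition, the following sketch shows how im2col (exposed in PyTorch as F.unfold) turns a convolution into a single matrix multiplication; OBELISK swaps this dense on-grid patch extraction for sparse off-grid grid_sample calls:

```python
import torch
import torch.nn.functional as F

# im2col ("unfold") turns each 3x3 patch into a column, so convolution
# becomes one matrix multiplication with the flattened kernel.
x = torch.randn(1, 2, 5, 5)          # (B, C, H, W)
w = torch.randn(4, 2, 3, 3)          # (out_ch, in_ch, K, K)

cols = F.unfold(x, kernel_size=3)    # (1, 2*3*3, 9): one column per position
out = (w.view(4, -1) @ cols).view(1, 4, 3, 3)

# matches the framework's own convolution
ref = F.conv2d(x, w)
print(torch.allclose(out, ref, atol=1e-5))  # True
```

OBELISK replaces the regular patch locations gathered by unfold with learned, continuous sampling coordinates, which is why grid_sample can serve as a drop-in substitute.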


You will find many more details in the upcoming MEDIA paper or for now in the original MIDL version:

How to use this code: The easiest use-case is to first run inference on the pre-processed TCIA multi-label data. You need the following arguments: -dataset tcia -model obeliskhybrid -input pancreas_ct1.nii.gz -output mylabel_ct1.nii.gz

Note that the folds are defined as follows: fold 1 has not seen labels/scans #1-#10, fold 2 has not seen labels #11-#20, etc.

  • you can now visualise the outcome in ITK-SNAP or measure the Dice overlap of the pancreas with the manual segmentation:
c3d label_ct1.nii.gz mylabel_ct1.nii.gz -overlap 2

which should return a Dice overlap of 0.783 and a visual segmentation like the one below.
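The Dice overlap that c3d reports for one label can equally be computed in a few lines of Python. This is a sketch with a toy array; for real volumes the .nii.gz files would first be loaded, e.g. with nibabel (not shown here):

```python
import numpy as np

def dice_overlap(seg_a, seg_b, label):
    """Dice coefficient for a single label value,
    analogous to c3d's -overlap output."""
    a = (seg_a == label)
    b = (seg_b == label)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# toy 2D example with label 2 (the pancreas label in the command above)
gt = np.array([[0, 2, 2], [0, 0, 2]])
pred = np.array([[0, 2, 0], [0, 2, 2]])
print(dice_overlap(gt, pred, 2))  # 2*2 / (3+3) = 0.666...
```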

ITK visualisation of automatic segmentation

  • you can later train your own models using the provided training function by providing the respective data folders

Visual overlay and table from the MEDIA preprint, demonstrating results on TCIA


MIDL 2018 / MEDIA 2019: one binary extremely large and inflecting sparse kernel (pytorch)
