

NeuroNet: Fast and Robust Reproduction of Multiple Brain Image Segmentation Pipelines

Example prediction on test data

Contact and referencing this work

If there are any issues, please contact the corresponding author of this implementation. If you employ this model in your work, please cite the paper:

@inproceedings{rajchl2018neuronet,
  title={NeuroNet: Fast and Robust Reproduction of Multiple Brain Image Segmentation Pipelines},
  author={Martin Rajchl and Nick Pawlowski and Daniel Rueckert and Paul M. Matthews and Ben Glocker},
  booktitle={International Conference on Medical Imaging with Deep Learning (MIDL)},
  year={2018}
}


The data can be downloaded after registration from the UK Biobank Imaging Enhancement Study website.

Images and segmentations are read from a csv file in the format below. The original files (*.csv) are provided in this repo.

These are parsed to extract tf.Tensor examples for training and evaluation in reader.py, using SimpleITK for I/O of the .nii files.
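As a minimal sketch, parsing such a csv into (subject id, image path, label path) tuples might look like the following. The three-column layout with a header row is an assumption for illustration; the actual columns in this repo's .csv files may differ.

```python
import csv

def read_subject_rows(csv_path):
    """Yield (subject_id, image_path, label_path) tuples from a csv file.

    Assumes a header row followed by rows of at least three columns;
    the real column layout in the repo's .csv files may differ.
    """
    with open(csv_path, newline='') as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row
        for row in reader:
            subject_id, image_path, label_path = row[:3]
            yield subject_id, image_path, label_path
```

Each yielded path could then be loaded with SimpleITK (e.g. sitk.ReadImage) and converted to an array before being wrapped as a tf.Tensor in the input pipeline.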



  • parse_csvs.ipynb creates training/validation/testing .csv files from data paths and splits the subject ids into categories.

  • sandbox.ipynb visually assesses the outputs of reader.py as a sanity check of the network inputs

  • eval.ipynb computes the visual and numerical results for the paper

  • reader.py DLTK reader, containing the label mappings to and from consecutive ids and the Python generator that creates input tensors for the network, using a SimpleITK interface

  • train.py main training script used to run all experiments

  • deploy.py generic deploy script for all experiments

  • config*.json configuration files that determine the dataset(s) to train on, the scaling of the flexible NeuroNet architecture, and a few exposed training parameters

  • *.csv csv files generated with parse_csvs.ipynb, containing the paths to all .nii image files
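The config*.json files are plain JSON, so they can be inspected and loaded with the standard library. The field names below are hypothetical placeholders to illustrate the kind of parameters described above (datasets, architecture scaling, training settings); they are not the repo's actual schema.

```python
import json

# Hypothetical config contents; the real config*.json schema may differ.
example_config = """
{
  "train_csv": "train.csv",
  "val_csv": "val.csv",
  "tasks": ["fsl_fast"],
  "filters": [16, 32, 64, 128],
  "learning_rate": 0.001
}
"""

config = json.loads(example_config)
print(config["tasks"], config["learning_rate"])
```

A script like train.py would typically read such a file once at startup and use the parsed dict to select the csv files and build the network.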

Data Preprocessing

We did not apply any data preprocessing, such as skull stripping or additional bias correction. The input to the network is a single MNI-registered 1mm isotropic T1-weighted MR image (as produced by the UK Biobank pipeline). Please refer to the UKB Neuroimaging documentation for additional information.
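Since the network expects 1mm isotropic inputs, a quick sanity check on a volume's voxel spacing can catch misregistered or resampled images early. This is a sketch, not part of the repo; the tolerance is an assumption, and the spacing tuple is in the (x, y, z) millimetre form returned by SimpleITK's Image.GetSpacing().

```python
def check_isotropic_1mm(spacing, tol=1e-3):
    """Return True if the voxel spacing is 1mm isotropic within tolerance.

    `spacing` is a (x, y, z) tuple in millimetres, e.g. as returned by
    SimpleITK's Image.GetSpacing().
    """
    return all(abs(s - 1.0) <= tol for s in spacing)

# A conforming MNI-registered 1mm volume passes the check:
print(check_isotropic_1mm((1.0, 1.0, 1.0)))  # True
```

Images that fail such a check would need to be resampled to the expected grid before being fed to the network.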


You can use the code (train.py) to train the model on the data yourself. Alternatively, we provide pretrained models from the paper here:

Depending on the model, the number of output volumes will correspond to the number of segmentation tasks (e.g. neuronet_single will produce one volume, neuronet_all will produce 5 segmentation volumes).

You can start a basic training with

python train.py -c CUDA_DEVICE --config MY_CONFIG

which will load the file paths from the previously created .csv files according to the config parameters.


To deploy a model and run inference, run the deploy.py script and point it to the model's save_path:

python deploy.py -p path/to/saved/model -c CUDA_DEVICE --config MY_CONFIG