pnlbwh/HD-BET

 
 


HD-BET

This repository provides easy-to-use access to our recently published HD-BET brain extraction tool. HD-BET is the result of a joint project between the Department of Neuroradiology at the Heidelberg University Hospital and the Division of Medical Image Computing at the German Cancer Research Center (DKFZ).

If you are using HD-BET, please cite the following publication:

Isensee F, Schell M, Tursunova I, Brugnara G, Bonekamp D, Neuberger U, Wick A, Schlemmer HP, Heiland S, Wick W, Bendszus M, Maier-Hein KH, Kickingereder P. Automated brain extraction of multi-sequence MRI using artificial neural networks. Hum Brain Mapp. 2019; 1–13. https://doi.org/10.1002/hbm.24750

Compared to other commonly used brain extraction tools, HD-BET has some significant advantages:

  • HD-BET was developed with MRI data from a large multicentric clinical trial in adult brain tumor patients, acquired across 37 institutions in Europe and covering a broad range of MR hardware, acquisition parameters, pathologies, and treatment-induced tissue alterations. We used 2/3 of the data for training and validation and 1/3 for testing. Moreover, independent testing of HD-BET was performed on three public benchmark datasets (NFBS, LPBA40 and CC-359).
  • HD-BET was trained with precontrast T1-w, postcontrast T1-w, T2-w and FLAIR sequences. It can perform independent brain extraction on a variety of MRI sequences and is not restricted to precontrast T1-weighted (T1-w) sequences. Other MRI sequences may work as well (just give it a try!).
  • HD-BET was designed to be robust with respect to brain tumors, lesions and resection cavities as well as different MRI scanner hardware and acquisition parameters.
  • HD-BET outperformed five publicly available brain extraction algorithms (FSL BET, AFNI 3DSkullStrip, Brainsuite BSE, ROBEX and BEaST) across all datasets and yielded median improvements of +1.33 to +2.63 points for the DICE coefficient and -0.80 to -2.75 mm for the Hausdorff distance (Bonferroni-adjusted p<0.001).
  • HD-BET is very fast on GPU, with a run time of less than 10 s per MRI sequence. Even on CPU, it is no slower than other commonly used tools.

Installation Instructions

Psychiatry Neuroimaging Laboratory has redefined the installation scheme for this program so that it works on all modern GPUs: RTX 4080, RTX 4090, RTX A6000, A100, and GTX 1080. This redefinition is the result of days of research at the Psychiatry Neuroimaging Laboratory, Brigham and Women's Hospital, Boston, Massachusetts. The steps are:

conda create -y -n hd-bet python=3.6
conda activate hd-bet

git clone --single-branch --branch pnl git@github.com:pnlbwh/HD-BET.git
cd HD-BET/
pip install .

conda install pytorch torchvision pytorch-cuda=12.1 -c pytorch -c nvidia

The final pytorch-cuda=12.1 specification makes it possible to run hd-bet on our GPUs, which are CUDA v12.* compatible. Both channels, -c pytorch and -c nvidia, are necessary to install the dependencies of pytorch-cuda=12.1.

After completing the installation, you can invoke hd-bet by its absolute path without sourcing or activating any environment:

/path/to/miniconda3/envs/hd-bet/bin/hd-bet --help
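If you prefer not to type the full path each time, a small wrapper along these lines can resolve the binary first (the miniconda3 location is an assumption; adjust CONDA_ROOT to your installation):

```shell
#!/usr/bin/env bash
# Resolve the hd-bet executable inside the conda environment without
# activating it. CONDA_ROOT defaults to an assumed location; override
# it in your shell if your conda lives elsewhere.
CONDA_ROOT="${CONDA_ROOT:-$HOME/miniconda3}"
HDBET_BIN="$CONDA_ROOT/envs/hd-bet/bin/hd-bet"
echo "$HDBET_BIN"
```

You can then call "$HDBET_BIN" --help directly, or add the env's bin directory to your PATH.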


How to use it

Using HD-BET is straightforward. You can use it in any terminal on your Linux system. The hd-bet command was installed automatically. We provide CPU as well as GPU support; running on GPU is a lot faster, though, and should always be preferred. Here is a minimalistic example of how you can use HD-BET (you need to be in the HD-BET directory):

hd-bet -i INPUT_FILENAME

INPUT_FILENAME must be a NIfTI (.nii.gz) file containing 3D MRI image data. 4D image sequences are not supported (however, they can be split upfront into the individual temporal volumes using fslsplit1). INPUT_FILENAME can be either a pre- or postcontrast T1-w, T2-w or FLAIR MRI sequence. Other modalities might work as well. Input images must match the orientation of the standard MNI152 template! Use fslreorient2std2 upfront to ensure that this is the case.
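For intuition, splitting a 4D sequence into its temporal 3D volumes (what fslsplit does on disk) amounts to slicing along the last axis. A minimal NumPy sketch of the concept, not the FSL implementation:

```python
import numpy as np

def split_4d(vol4d):
    """Split a 4D array (x, y, z, t) into a list of 3D volumes,
    one per time point -- conceptually what fslsplit produces."""
    return [vol4d[..., t] for t in range(vol4d.shape[-1])]

# Toy 4D "sequence" with 5 time points
seq = np.zeros((4, 4, 4, 5))
volumes = split_4d(seq)  # 5 volumes, each of shape (4, 4, 4)
```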

By default, HD-BET runs in GPU mode, uses the parameters of all five models (which originate from a five-fold cross-validation), applies test-time data augmentation by mirroring along all axes, and does not do any postprocessing.

For batch processing it is faster to process an entire folder at once as this will mitigate the overhead of loading and initializing the model for each case:

hd-bet -i INPUT_FOLDER -o OUTPUT_FOLDER

The above command will look for all NIfTI files (*.nii.gz) in INPUT_FOLDER and save the brain masks under the same names in OUTPUT_FOLDER.
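The folder scan presumably boils down to globbing for *.nii.gz files; a hypothetical sketch (find_nifti is an illustrative helper, not part of HD-BET):

```python
from pathlib import Path

def find_nifti(folder):
    """Return the NIfTI file names in a folder, sorted -- a sketch of
    the lookup that folder mode performs (hypothetical helper)."""
    return sorted(p.name for p in Path(folder).glob("*.nii.gz"))
```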

GPU is nice, but I don't have one of those... What now?

HD-BET has CPU support. Running on CPU takes a lot longer though and you will need quite a bit of RAM. To run on CPU, we recommend you use the following command:

hd-bet -i INPUT_FOLDER -o OUTPUT_FOLDER -device cpu -mode fast -tta 0

This works of course also with just an input file:

hd-bet -i INPUT_FILENAME -device cpu -mode fast -tta 0

The option -tta 0 disables test-time data augmentation (a speedup of roughly 8x), and -mode fast uses only one model instead of the ensemble of five models for the prediction.
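When scripting many runs, it can help to assemble the argument list programmatically; hdbet_cmd below is a hypothetical convenience wrapper that uses only the flags documented above:

```python
def hdbet_cmd(inp, out=None, device="cuda", fast=False, tta=True):
    """Build an hd-bet command line (hypothetical helper; only the
    flags documented in this README are emitted)."""
    cmd = ["hd-bet", "-i", inp]
    if out is not None:
        cmd += ["-o", out]
    if device != "cuda":
        cmd += ["-device", device]
    if fast:
        cmd += ["-mode", "fast"]
    if not tta:
        cmd += ["-tta", "0"]
    return cmd

# CPU batch example, matching the command above:
cpu_cmd = hdbet_cmd("INPUT_FOLDER", "OUTPUT_FOLDER",
                    device="cpu", fast=True, tta=False)
```

The resulting list can be passed to subprocess.run for each batch.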

More options:

For more information, please refer to the help functionality:

hd-bet --help

FAQ

  1. How much GPU memory do I need to run HD-BET?
    We ran all our experiments on NVIDIA Titan X GPUs with 12 GB of memory. For inference you will need less, but since inference is implemented by exploiting the fully convolutional nature of CNNs, the amount of memory required depends on your image. A typical image should run with less than 4 GB of GPU memory consumption. If you run into out-of-memory problems, please check the following: 1) make sure the voxel spacing of your data is correct and 2) ensure your MRI image only contains the head region.
  2. Will you provide the training code as well?
    No. The training code is tightly wound around the data which we cannot make public.
  3. What run time can I expect on CPU/GPU?
    This depends on your MRI image size. Typical run times (preprocessing, postprocessing and resampling included) are just a couple of seconds on GPU and about 2 minutes on CPU (using -tta 0 -mode fast).
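On the out-of-memory check 1) above: voxel spacing can be recovered from the NIfTI affine. A minimal NumPy sketch of that check (in practice, tools such as nibabel report it directly via img.header.get_zooms()):

```python
import numpy as np

def voxel_spacing(affine):
    """Voxel size in mm along each axis: the Euclidean norms of the
    columns of the 3x3 part of a NIfTI affine."""
    return np.sqrt((np.asarray(affine)[:3, :3] ** 2).sum(axis=0))

# Example: 1 x 1 x 3 mm spacing encoded in a diagonal affine
aff = np.diag([1.0, 1.0, 3.0, 1.0])
spacing = voxel_spacing(aff)
```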

1https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/Fslutils

2https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/Orientation%20Explained
