Precise and Rapid Whole-Head Segmentation from Magnetic Resonance Images of Older Adults using Deep Learning
We provide open-source code for a pipeline called General, Rapid, And Comprehensive whole-hEad tissue segmentation, nicknamed GRACE. GRACE is trained and validated on a novel dataset of 177 MR-derived reference segmentations that underwent meticulous manual review and correction. Each T1-weighted MRI volume is segmented into 11 tissue types: white matter, grey matter, eyes, cerebrospinal fluid, air, blood vessel, cancellous bone, cortical bone, skin, fat, and muscle. GRACE segments this spectrum of tissue types from older adults' T1 MRI scans with favorable accuracy and speed. The segmentation requires only the input T1 MRI and no special preprocessing in neuroimaging software. The trained GRACE model is optimized on older adult heads to enable high-precision modeling in age-related brain disorders.
This repository provides the official implementation for training GRACE, as well as instructions for using the trained model, as described in the following paper:
Precise and Rapid Whole-Head Segmentation from Magnetic Resonance Images of Older Adults using Deep Learning
Skylar E. Stolte1, Aprinda Indahlastari2,3, Jason Chen4, Alejandro Albizu2,5, Ayden Dunn3, Samantha Pederson3, Kyle B. See1, Adam J. Woods2,3,5, and Ruogu Fang1,2,6,*
1 J. Crayton Pruitt Family Department of Biomedical Engineering, Herbert Wertheim College of Engineering, University of Florida (UF), USA
2 Center for Cognitive Aging and Memory, McKnight Brain Institute, UF, USA
3 Department of Clinical and Health Psychology, College of Public Health and Health Professions, UF, USA
4 Department of Computer & Information Science & Engineering, Herbert Wertheim College of Engineering, University of Florida (UF), USA
5 Department of Neuroscience, College of Medicine, UF, USA
6 Department of Electrical and Computer Engineering, Herbert Wertheim College of Engineering, UF, USA
Imaging NeuroScience
paper | code
- GRACE segments 11 tissues from T1 MRIs of the human head with high accuracy and fast processing speed.
- GRACE includes its own preprocessing pipeline and does not require the input to be preprocessed in other neuroimaging tools.
- GRACE achieves an average Hausdorff Distance of 0.21, outperforming the runner-up's average of 0.36 (lower is better).
- A representative GRACE model is available from this GitHub repository. This model may be particularly useful to those who need segmentations of MRIs of older adult heads.
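As context for the Hausdorff Distance metric reported above, the symmetric Hausdorff distance between two point sets can be computed with SciPy. This is a generic illustration with made-up toy points, not the evaluation code used in the paper:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(u, v):
    """Symmetric Hausdorff distance between two point sets (N x D arrays)."""
    return max(directed_hausdorff(u, v)[0], directed_hausdorff(v, u)[0])

# Toy 2D boundary point sets (illustrative only).
pred = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
ref = np.array([[0.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
print(hausdorff(pred, ref))  # 1.0
```

A lower value means the predicted boundary stays closer to the reference boundary everywhere, which is why 0.21 beats 0.36.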
Our pretrained model can be found in the v1.0.0 release of this GitHub repository or at either of the following two links:
- Google Drive: https://drive.google.com/file/d/1C_oidhimoReTV_learRM4-6SX4EEPidu/view?usp=sharing
- Dropbox: https://www.dropbox.com/scl/fi/8i75chjrs0alpfpb7ni3j/GRACE.pth?rlkey=jx8ltbinher52r0cfly3pm3ns&dl=0
There are two MATLAB scripts; edit the directory paths in them to point to your own data. Before running them, select the GRACE working folder in MATLAB and add it to the path.
In case you are using a different version of MATLAB: if you are using MATLAB 2020b, you need to change line 56 to:
image(index) = tissue_cond_updated.Labels(k)
Then you can run combine_mask.m. The output should be a Data folder with the following structure:

Data
  ImagesTr
    sub-TrX_T1.nii
    sub-TrXX_T1.nii
    ...
  ImagesTs
    sub-TsX_T1.nii
    sub-TsXX_T1.nii
    ...
  LabelsTr
    sub-TrX_seg.nii
    sub-TrXX_seg.nii
    ...
  LabelsTs
    sub-TsX_seg.nii
    sub-TsXX_seg.nii
    ...
Navigate to /your_data/Data/ and run make_datalist_json.m.
Once this script finishes, you can exit MATLAB and run the remaining steps from the terminal.
GRACE is built on MONAI, an open-source framework for deep learning in medical imaging. We provide a .sh script to help you build your own container for running the code.
Run the following command in the terminal. Change the path after --sandbox to your desired writable directory and the path after --nv to your own directory:
./build_container_v08.sh
The output should be a folder named monaicore08 under your desired directory.
Once the data and the container are ready, you can train the model by using the following command:
./train.sh
Before training the model, make sure to change the following:
- change the path after the first `singularity exec --nv` to the directory that contains monaicore08, for example: /user/GRACE/monaicore08
- change the path after --bind to the directory that contains monaicore08
- change data_dir to your data directory
- change the model name to your desired model name

You can also specify the maximum iteration number for training. With iterations = 100, training takes about one hour; with iterations = 25,000, it takes about 24 hours.
Testing is very similar to training. Change all paths and make sure model_save_name matches the model name used in runMONAI.sh, then run runMONAI_test.sh with the following command:
./test.sh
The output for each test subject is saved as a .mat file.
An additional script under /mat_to_nii converts the .mat files to .nii files. Add the list of files you would like to convert to the variable FILES in /mat_to_nii/main.py, then run that Python script to perform the conversion. Scripts to interconvert between .nii and .raw, depending on your needs, are also available in /Nii_Raw_Interconversion as either Python or MATLAB versions.
The code for visualizing your results is available at /Visualization Code. Open main_v2.py and add your image ID names to the variable SUBLIST. You also need to enter the file paths for each entry in SUBLIST, following the example below the variable. If you are only using this repository, you can leave all subject entries as empty quotes ('') other than T1 and GRACE. Run main_v2.py after making these edits.
If you use this code, please cite our paper:
@InProceedings{stolte2024,
author="Stolte, Skylar E. and Indahlastari, Aprinda and Chen, Jason and Albizu, Alejandro and Dunn, Ayden and Pederson, Samantha and See, Kyle B. and Woods, Adam J. and Fang, Ruogu",
title="Precise and Rapid Whole-Head Segmentation from Magnetic Resonance Images of Older Adults using Deep Learning",
booktitle="Imaging NeuroScience",
year="2024",
url="TBD"
}
This work was supported by the National Institutes of Health/National Institute on Aging (NIA RF1AG071469, NIA R01AG054077), the National Science Foundation (1842473, 1908299, 2123809), the NSF-AFRL INTERN Supplement (2130885), the University of Florida McKnight Brain Institute, the University of Florida Center for Cognitive Aging and Memory, and the McKnight Brain Research Foundation. We acknowledge the NVIDIA AI Technology Center (NVAITC) for their suggestions to this work.
We employ UNETR as our base model from: https://github.com/Project-MONAI/research-contributions/tree/main/UNETR
For any discussion, suggestions, or questions, please contact: Skylar Stolte, Dr. Ruogu Fang.
Smart Medical Informatics Learning & Evaluation Laboratory, Dept. of Biomedical Engineering, University of Florida