Implementation of the paper "Neural Orientation Distribution Fields for Estimation and Uncertainty Quantification in Diffusion MRI" with the additional HashEnc method.

Estimating Neural Orientation Distribution Fields for High Resolution Diffusion MRI Scans

This repository implements the paper Neural orientation distribution fields for estimation and uncertainty quantification in diffusion MRI (https://www.sciencedirect.com/science/article/abs/pii/S1361841524000306) with two different kinds of implicit networks.

Abstract

The Orientation Distribution Function (ODF) characterizes key brain microstructural properties and plays an important role in understanding brain structural connectivity. Recent works introduced Implicit Neural Representation (INR) based approaches to form a spatially aware continuous estimate of the ODF field and demonstrated promising results in key tasks of interest when compared to conventional discrete approaches. However, traditional INR methods face difficulties when scaling to large-scale images, such as modern ultra-high-resolution MRI scans, posing challenges in learning fine structures as well as inefficiencies in training and inference speed. In this work, we propose HashEnc, a grid-hash-encoding-based estimation of the ODF field and demonstrate its effectiveness in retaining structural and textural features. We show that HashEnc achieves a 10% enhancement in image quality while requiring 3x less computational resources than current methods.

From: Technical University of Munich and Harvard Medical School

Setup

Environment requirements

  • CUDA 11.X
  • Python 3.8

Install the requirements using conda:

    conda env create --name nodf --file=environment.yml
    conda activate nodf

Dataset

We trained our model on the publicly available In vivo human whole-brain Connectom diffusion MRI dataset at 760 µm isotropic resolution (https://www.nature.com/articles/s41597-021-00904-z).

You need the following files:

  • signal.nii.gz: (X, Y, Z, M) raw MRI signal
  • bval.txt: b-values written on one line, separated by spaces
  • bvec.txt: b-vectors, each written vertically
  • mask.nii.gz: (X, Y, Z) binary (0/1) mask selecting the whole-brain region or a region of interest
  • (optional) gt_odfs.pt: (N, K) torch tensor of ground-truth ODFs used to calculate the GFA and ODF L2 errors, where N is the number of brain (non-void) voxels
  • (optional) gt_gfa.nii.gz: (X, Y, Z) ground-truth GFA; can be created using evaluate.py with gt_odfs.pt as the predictions, if available
  • (optional) gt_dti.nii.gz: (X, Y, Z, 3) ground-truth DTI; can be created using evaluate.py with gt_odfs.pt as the predictions, if available

Put these files under the data folder. If they are stored somewhere else, point to that folder with the --data <path to your data folder> flag. A quick sanity check of the expected file shapes is sketched below.
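
As a rough illustration, the following snippet loads the required files and checks that their shapes are consistent. The file names follow the list above; the orientation of bvec.txt (vectors stored as columns) is an assumption you may need to adapt.

    import nibabel as nib
    import numpy as np

    signal = nib.load("data/signal.nii.gz").get_fdata()  # (X, Y, Z, M)
    mask = nib.load("data/mask.nii.gz").get_fdata()      # (X, Y, Z)
    bvals = np.loadtxt("data/bval.txt")                  # (M,)
    bvecs = np.loadtxt("data/bvec.txt")                  # assumed (3, M), one b-vector per column

    assert signal.shape[:3] == mask.shape, "mask must match the spatial grid"
    assert signal.shape[3] == bvals.shape[0], "one b-value per diffusion volume"
    assert bvals.shape[0] in bvecs.shape, "one b-vector per diffusion volume"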

Usage

If you want to use pre-trained models, please download them from the section below.

Predicting

    python predict.py --ckpt_path <path to pytorch lightning .ckpt file>

The predictions can be found under output/<experiment>/predictions

Evaluation

To generate GFA and DTI images and compute the FSIM and ODF L2-norm scores:

    python evaluate.py --device cpu

This will directly use the pointwise_estimates.pt file under output/<experiment>/predictions. The evaluation outputs can be found under output/<experiment>/evaluations.
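
For orientation, generalized fractional anisotropy (GFA) can be computed analytically from real symmetric spherical harmonic coefficients. The sketch below assumes pointwise_estimates.pt stores an (N, K) coefficient tensor with the l = 0 term first; that layout is an assumption about the file, not a documented interface.

    import torch

    path = "output/<experiment>/predictions/pointwise_estimates.pt"  # replace <experiment>
    coeffs = torch.load(path)                  # assumed shape (N, K)
    c0 = coeffs[:, 0]                          # l = 0 coefficient per voxel
    power = (coeffs ** 2).sum(dim=1)           # total SH power (Parseval)
    gfa = torch.sqrt(1.0 - c0 ** 2 / power.clamp_min(1e-12))  # GFA = sqrt(1 - <psi>^2 / <psi^2>)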

Training

To train HashEnc:

    python train.py

To train a large SIREN network:

    python train.py --experiment_name baseline --depth 10 --r 1024 --learning_rate 1e-6 --use_baseline

To train HashEnc with Total Variation (TV) regularization:

    python train.py --use_tv

Use --lambda_tv <your value> to set the total variation regularization strength.
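
To give an intuition for what this term penalizes, here is a minimal sketch of an anisotropic TV penalty on a coefficient field, assuming a (X, Y, Z, K) tensor of predicted coefficients; it is only an illustration, not the repository's exact implementation.

    import torch

    def tv_penalty(coeffs: torch.Tensor) -> torch.Tensor:
        # coeffs: (X, Y, Z, K) field of predicted coefficients (illustrative shape)
        dx = (coeffs[1:, :, :, :] - coeffs[:-1, :, :, :]).abs().mean()
        dy = (coeffs[:, 1:, :, :] - coeffs[:, :-1, :, :]).abs().mean()
        dz = (coeffs[:, :, 1:, :] - coeffs[:, :, :-1, :]).abs().mean()
        return dx + dy + dz

    # the training objective would then look roughly like:
    # loss = data_term + lambda_tv * tv_penalty(predicted_coefficients)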

MPPCA

To denoise the dMRI image with MPPCA, use the following command:

    python utils/mppca.py --experiment_name mppca

It will output a nifti image file of the dMRI scan with reduced noise.
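
For reference, MPPCA denoising itself is available in DIPY; the snippet below is a generic sketch of that routine applied to the raw signal and is not a description of what utils/mppca.py does internally.

    import nibabel as nib
    from dipy.denoise.localpca import mppca

    img = nib.load("data/signal.nii.gz")                  # (X, Y, Z, M) raw signal
    denoised = mppca(img.get_fdata(), patch_radius=2)     # Marchenko-Pastur PCA denoising
    nib.save(nib.Nifti1Image(denoised, img.affine), "data/signal_denoised.nii.gz")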

SHLS

To get the ODFs from the dMRI image with SHLS, use the following command:

    python utils/shls.py --out_folder <add output path here>

It will output a tensor of the estimated ODFs' spherical harmonic coefficients.
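
A conventional voxel-wise baseline of this kind can be sketched with DIPY's constant solid angle (CSA) ODF model; the choice of CsaOdfModel and the SH order below are assumptions for illustration and may differ from what utils/shls.py implements.

    import nibabel as nib
    import numpy as np
    from dipy.core.gradients import gradient_table
    from dipy.reconst.shm import CsaOdfModel

    signal = nib.load("data/signal.nii.gz").get_fdata()
    mask = nib.load("data/mask.nii.gz").get_fdata().astype(bool)
    bvals = np.loadtxt("data/bval.txt")
    bvecs = np.loadtxt("data/bvec.txt")               # DIPY accepts (N, 3) or (3, N) b-vectors

    gtab = gradient_table(bvals, bvecs=bvecs)
    model = CsaOdfModel(gtab, 8)                      # order-8 real SH fit per voxel
    coeffs = model.fit(signal, mask=mask).shm_coeff   # (X, Y, Z, K) SH coefficients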

Pretrained Models

  • HashEnc: trained as above with the default network configuration
  • SIREN: large SIREN network trained as above

Visualization

To visualize the deconvolved ODFs:

    python visualize.py

The deconvolved ODFs can be found under output/<experiment>/visualization

Logging

You can use tensorboard to check losses and accuracies by visiting localhost:6006 after running:

    tensorboard --logdir output

Citation

If you find our work helpful, please cite the original NODF paper:

@article{consagra2024nodf,
	title = {Neural orientation distribution fields for estimation and uncertainty quantification in diffusion MRI},
	journal = {Medical Image Analysis},
	volume = {93},
	year = {2024},
	issn = {1361-8415},
	doi = {https://doi.org/10.1016/j.media.2024.103105},
	url = {https://www.sciencedirect.com/science/article/pii/S1361841524000306},
	author = {William Consagra and Lipeng Ning and Yogesh Rathi},
	keywords = {Uncertainty quantification, Deep learning, Neural field, Diffusion MRI, Functional data analysis},
}

and ours:

TODO
