1. An introduction to MRH-Net
MRH-Net (for magnetic resonance histology-Net) is a tool developed to transform mouse brain magnetic resonance imaging (MRI) data into maps of specific cellular structures (e.g., axons and myelin).
Modern MRI provides unparalleled tissue contrasts for visualizing brain structures and functions non-invasively. To date, researchers are still actively developing new MRI contrasts, or markers, for detecting pathological changes in the brain. However, the sensitivity and specificity of MRI markers to cellular structures, especially under pathological conditions, are largely unknown due to the lack of direct links between MRI signals and cellular structures. This is quite different from histology, which, over the past few decades, has developed multiple stainings that tag cellular structures with high specificity. For example, antibodies that specifically bind to neurofilament (NF) and myelin basic protein (MBP) can be used to stain axons and myelin, respectively. These stainings are common tools for neurobiologists studying disease mechanisms and potential treatments in mouse models, but the procedures to acquire histological images are invasive and time-consuming. We hope that MRH-Net will help neurobiologists take advantage of modern MRI techniques by providing easy-to-understand maps of key cellular structures with the highest possible sensitivity and specificity.
MRH-Nets are deep convolutional neural networks (CNNs) trained on co-registered histology and MRI data to infer target cellular structures from multi-contrast MRI signals. They were designed for the following tasks:
- To transform MRI data to images of cellular structures with contrasts that mimic target histology.
- To enhance the sensitivity and specificity of MRI to specific cellular structures.
Here, you will find our trained MRH-Nets, their source code, and the mouse brain MRI datasets used for training and testing, along with the acquisition parameters. The datasets have been carefully registered to mouse brain images from the Allen Mouse Brain Atlas (https://mouse.brain-map.org).
MRH-Net was designed based on three assumptions:
- The relationship between MRI signals and target cellular structures is local, so that the signals at each pixel are a realization/instance of the relationship between cellular structures and MRI signals. The millions of pixels in each MRI dataset thus provide sufficient data to train MRH-Nets.
- The multi-contrast MRI signals are sensitive to the presence of target cellular structures. Because different MRI contrasts are sensitive to distinct aspects of a particular structure (e.g., diffusion MRI is sensitive to the restrictive effects of cell membranes and myelin sheaths), multi-contrast MRI signals can potentially improve the sensitivity and specificity of MRH-Net.
- Deep CNNs can accurately infer the distribution of target cellular structures from multi-contrast MRI signals.
While the first assumption is true for most MRI data, it is often difficult to know whether the last two assumptions are met. In our experiments, the MRI protocol included T2, magnetization transfer (MT), and diffusion MRI, which have been shown to be sensitive to axons and myelin. The results generated by MRH-Net from multi-contrast MRI data demonstrated remarkable similarities with NF- and MBP-stained reference histology, and the sensitivity and specificity of the results were higher than those of any single MRI marker.
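The first assumption can be made concrete: each voxel contributes one training pair, a vector of multi-contrast signals mapped to the histology intensity at the same location. A minimal Matlab illustration (the variable names `mri`, `histo`, and `mask` are hypothetical placeholders for pre-loaded, co-registered volumes, not names from the released code):

```matlab
% Illustrative only: per-voxel training pairs from co-registered volumes.
% mri is an [X x Y x Z x C] multi-contrast stack, histo an [X x Y x Z] map.
C = size(mri, 4);
X = reshape(mri, [], C);        % one row of C contrast values per voxel
Y = reshape(histo, [], 1);      % matching histology intensity per voxel
inBrain = reshape(mask, [], 1) > 0;
X = X(inBrain, :);              % keep only voxels inside the brain mask
Y = Y(inBrain);
% Every masked voxel is now one (signal, structure) training sample.
```

With millions of in-mask voxels per brain, even a handful of subjects yields a large training set.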
The workflow of MRH-Net
Fig. 1: The workflow of MRH-Net. The basic network was trained using co-registered 3D MRI and autofluorescence (AF) data of adult C57BL mouse brains. The AF dataset contains data from 100 subjects from the Allen Brain Connectivity Project (https://connectivity.brain-map.org/). The MRI dataset contains multi-contrast data (T2, magnetization transfer, and diffusion-weighted) from 6 post-mortem mouse brains. The trained network, MRH-AF, can then take new MRI data acquired using the same protocol and generate 3D pseudo-AF data. For neurofilament (NF) and myelin basic protein (MBP) data, the Allen reference dataset only contains single-subject data (http://connectivity.brain-map.org/static/referencedata). We registered these histological images to MRI data and used transfer learning to generate new networks from the existing MRH-AF network. The resulting MRH-NF and MRH-MBP networks are intended to translate multi-contrast MRI data to pseudo-NF and pseudo-MBP images.
2. How to use MRH-Net?
Below are several scenarios in which MRH-Net and the associated resources may be used.
- Use our deep learning networks to transform mouse brain MRI data (acquired using the same acquisition parameters) into histology-like images. Currently, our networks have been trained to generate auto-fluorescence and NF/MBP-stained images.
- Register your own mouse brain MRI data to the MRI dataset provided here and use the source code to train your own networks.
- Register your own histological data to the MRI dataset provided here and use the source code to train your own networks. Note: co-registering histological and MRI data is time-consuming.
- Register data acquired using new MRI methods to the MRI dataset provided here and test their sensitivity and specificity using the histological data as ground truth.
MRH-Net files layout
Fig. 2: File organization chart.
3. What are the limitations of MRH-Net?
- The current MRH-Nets were trained with limited MRI and histological data from normal adult C57BL6 mouse brains (2 months old). Their performance on brains with pathology has not been evaluated. Additional training data (from different strains, ages, and pathologies) will likely further improve the performance of MRH-Nets.
- MRH-Nets may not work for MRI data acquired using different MRI scanners or with different acquisition parameters. However, it is relatively easy to co-register MRI data acquired on different MRI scanners and retrain the networks.
- MRH-Nets cannot be extended to clinical use, due to differences between mouse and human brain tissues and the lack of histology ground truth for the human brain.
For more details on the project, please see our paper:
"Inferring Maps of Cellular Structures from MRI Signals using Deep Learning" https://www.biorxiv.org/content/10.1101/2020.05.01.072561v1
4. System requirements
- Matlab R2019b or later
- Deep Learning Toolbox: https://www.mathworks.com/products/deep-learning.html
- NIfTI toolbox: https://www.mathworks.com/matlabcentral/fileexchange/8797-tools-for-nifti-and-analyze-image
- t-SNE: https://www.mathworks.com/help/stats/tsne.html
Running verified on:

CUDADevice with properties:
    Name: 'TITAN RTX'
    Index: 1
    ComputeCapability: '7.5'
    SupportsDouble: 1
    DriverVersion: 10.1000
    ToolkitVersion: 10.1000
    MaxThreadsPerBlock: 1024
    MaxShmemPerBlock: 49152
    MaxThreadBlockSize: [1024 1024 64]
    MaxGridSize: [2.1475e+09 65535 65535]
    SIMDWidth: 32
    TotalMemory: 2.5770e+10
    AvailableMemory: 1.7937e+10
    MultiprocessorCount: 72
    ClockRateKHz: 1770000
    ComputeMode: 'Default'
    GPUOverlapsTransfers: 1
    KernelExecutionTimeout: 1
    CanMapHostMemory: 1
    DeviceSupported: 1
    DeviceSelected: 1
5.1 Data Preparation
MRH-Net takes co-registered MRI and target histology data for training and testing. You can find our 3D multi-contrast MRI and histology data in the Train_Data directory. Details on MRI data acquisition, the source of the histology data, and the co-registration steps can be found in our manuscript (doi: https://doi.org/10.1101/2020.05.01.072561). If you plan to use our trained networks without modifications, it is important to use the same image acquisition protocols.
The figure below gives examples of co-registered MRI and histology data. Here, we use landmarks to help evaluate the accuracy of registration.
5.2 Network Training
- Once co-registered MRI and target histological data are ready, use demo_trainingPrep.m in Matlab to prepare training samples for the next step.
For example, when training the MRH-AF network, demo_trainingPrep.m needs input at the following places:
work_folder = ['.\Train_Data\Subj\'];   <--- directory of subject data used for training
halfpatch_size = 1;                     <--- the size of an individual patch is 2*halfpatch_size+1
stride0 = 20;                           <--- distance in pixels between patches; smaller stride0 means denser sampling
fluo_img = load_untouch_nii(['.\Train_Data\Allen_Autofluo\AllenPathology2P60.img']);   <--- location of target histological data. Here, we used the 3D AF data.
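To make the roles of halfpatch_size and stride0 concrete, the sampling grid they define can be sketched as follows (this is an illustrative loop over one 2D slice of one contrast, with a hypothetical variable `slice`, not the released MRH_trainingPrep.m):

```matlab
% Sketch of how halfpatch_size and stride0 define the sampling grid.
halfpatch_size = 1;                       % patch is 2*1+1 = 3 pixels wide
stride0 = 20;                             % grid spacing between patch centers
p = halfpatch_size;
[nx, ny] = size(slice);                   % one 2D slice of one contrast
patches = {};
for cx = 1+p : stride0 : nx-p
    for cy = 1+p : stride0 : ny-p
        patches{end+1} = slice(cx-p:cx+p, cy-p:cy+p);  %#ok<AGROW>
    end
end
% Smaller stride0 -> more patch centers -> denser sampling, more samples.
```

Reducing stride0 increases the number of training samples at the cost of more redundancy between neighboring patches.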
- demo_trainingPrep.m then calls MRH_trainingPrep.m. In MRH_trainingPrep.m, please specify the data files with different MR contrasts. In our experiments, we have multiple data files: dwi2000 contains 30-direction diffusion MRI data acquired with a diffusion weighting (b-value) of 2,000 s/mm^2; dwi5000 contains 30-direction diffusion MRI data acquired with a b-value of 5,000 s/mm^2; and t2MTONOFF contains T2-weighted, non-MT-weighted, and MT-weighted images. In addition, fractional anisotropy (FA) images calculated from dwi2000 and a brain mask image are loaded.
dwi2000 = load_untouch_nii([folder_dwi,file_list(sample_img).name,'\rigidaffine_Lddm_dwi2000.img']);     <--- dwi2000 data
dwi5000 = load_untouch_nii([folder_dwi,file_list(sample_img).name,'\rigidaffine_Lddm_dwi5000.img']);     <--- dwi5000 data
t2MTONOFF = load_untouch_nii([folder_dwi,file_list(sample_img).name,'\rigidaffine_lddm_t2MTONOFF.img']); <--- t2/MTON/MTOFF data
fa_img = load_untouch_nii([folder_dwi,file_list(sample_img).name,'\rigidaffine_Lddm_fa.img']);           <--- FA data
fa_mask = load_untouch_nii([folder_dwi,file_list(sample_img).name,'\Masked_outline.img']);               <--- brain mask
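Conceptually, these volumes are combined into one multi-channel stack so that each voxel carries a feature vector of all contrasts. A hedged sketch (the exact channel order and masking step in the released code are assumptions):

```matlab
% Sketch: stack the loaded volumes along a 4th (channel) dimension so each
% voxel carries one feature vector of all contrasts (names as in the demo).
mri_stack = cat(4, dwi2000.img, ...   % 30 diffusion directions, b = 2000
                   dwi5000.img, ...   % 30 diffusion directions, b = 5000
                   t2MTONOFF.img, ... % T2 / MT-on / MT-off images
                   fa_img.img);       % FA computed from dwi2000
mri_stack = mri_stack .* (fa_mask.img > 0);  % zero out non-brain voxels
```

The total number of channels in this stack is what the input_channel parameter in the training script must match.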
We have uploaded several MRI and histological datasets online, which can be downloaded for free. Please see the README in /Train_Data.
Once demo_trainingPrep.m finishes, the prepared training data will be saved in a .mat file. One example .mat file can be found in the folder /Train_Data.
- The training samples will then be used to train a neural network using demo_training.m in Matlab. Please update the following parameters in the code.
load_mat = ['.\traindata.mat'];   <--- location of the training data generated by the previous procedure
input_channel = 67;               <--- total number of image contrasts
% depth: 30 is used in the paper for the auto-fluorescence training task,
% as a large amount of data is accessible from Allen.
% A shallower network is preferred when only limited training data are available.
% We have tested depth = 3 for the MRI-to-myelin network training.
depth = 30;                       <--- depth of the network to be trained
- demo_training.m will then call MRH_training.m, which typically takes several hours on our system.
The trained network will be saved in .mat format. Our trained networks can be found in the folder /network. One example of a voxel-wise MRH-Net using 5 ResBlocks is shown below (users can change the number of ResBlocks via the depth parameter in Code/demo_training.m, e.g., depth = 30):
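A residual network of this general shape can be sketched with Matlab's layerGraph API. This is a hedged illustration of one residual block, not the released architecture; layer names and sizes are assumptions, and the final 1-channel convolution and regression layer are only indicated in comments:

```matlab
% Hedged sketch of a voxel-wise residual regression network in the spirit
% of demo_training.m (layer names and sizes are illustrative).
layers = [
    imageInputLayer([3 3 67], 'Name', 'input')      % patch x contrasts
    convolution2dLayer(3, 64, 'Padding', 'same', 'Name', 'Conv1')
    reluLayer('Name', 'Relu1')];
lgraph = layerGraph(layers);
% One residual block: conv-relu-conv plus an identity skip connection.
block = [
    convolution2dLayer(3, 64, 'Padding', 'same', 'Name', 'res_Conv1')
    reluLayer('Name', 'res_Relu')
    convolution2dLayer(3, 64, 'Padding', 'same', 'Name', 'res_Conv2')
    additionLayer(2, 'Name', 'res_Add')];
lgraph = addLayers(lgraph, block);
lgraph = connectLayers(lgraph, 'Relu1', 'res_Conv1');
lgraph = connectLayers(lgraph, 'Relu1', 'res_Add/in2');   % the skip path
% Repeat the block 'depth' times, then end with a 1-channel convolution
% and a regressionLayer to predict the histology intensity.
```

The depth parameter controls how many such blocks are chained before the final regression head.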
One example of a smoothed training curve from MRH-AF is shown below:
5.3 Network Testing
Testing data can be generated using demo_testingPrep.m in Matlab (it calls MRH_testingPrep.m). The trained network can be applied to the testing data using demo_testing.m in Matlab (it calls MRH_testing.mlx). Please update the following parameters:
load_data = '.\Test_Data\testdataPatch_mouse.mat';               <--- the testing data
load_net = '.\network\net_30layerV3Res_HRJG_allMRIs_Fluo.mat';   <--- the pre-trained network
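Applying the network then amounts to loading both files and running inference. A minimal sketch (the field names `net` and `testPatches` inside the .mat files are assumptions, not taken from the released code):

```matlab
% Sketch of applying a trained network to prepared test patches.
S = load(load_data);                    % testing patches prepared earlier
N = load(load_net);                     % pre-trained network (assumed field: net)
pred = predict(N.net, S.testPatches);   % one predicted intensity per patch
```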
5.4 Mapping virtual histology
To reconstruct whole-brain or single-slice virtual histology from single-voxel data, please use MRH_recon.m.
- Please redefine the slice parameters in the code according to your data size at the following location:
hei = 200;   <--- slice height
wid = 128;   <--- slice width
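The reconstruction step essentially folds the vector of per-voxel predictions back into image space. A hedged sketch of the idea (it assumes predictions were produced in column-major voxel order for a full hei-by-wid slice, which may differ from MRH_recon.m):

```matlab
% Sketch: fold the vector of per-voxel predictions back into a 2D slice.
hei = 200; wid = 128;
slice_img = reshape(pred, hei, wid);     % hei x wid pseudo-histology slice
imagesc(slice_img); axis image off; colormap gray;
```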
5.5 Transfer Learning
Transfer learning may be used when the available histological data are too limited to train a network without overfitting. In our experiment, we have 3D AF data from more than 1,000 mice, but only single mouse brain MBP and NF data. We experimented with transfer learning to adapt the MRH-AF network to MBP and NF data. The training and testing procedures are very similar to those described above, except for the training layers and hyperparameter settings in /TransferLearning/MRH_training_transfer.mlx. For more details on transfer learning in Matlab, please see the online resource at https://www.mathworks.com/help/deeplearning/ug/transfer-learning-using-pretrained-network.html
- Transfer learning needs a pre-trained network as the generic starting point:
pre_network = ['.\network\net_30layerV3Res_HRJG_allMRIs_Fluo.mat'];   <--- the pre_network is the MRH-AF network trained using 3D AF data
net = MRH_training_Transfer(load_mat, networkDepth, pre_network);
- Below are the settings used in our training:
newLearnableLayer4 = convolution2dLayer(3,64,'Padding','same', ...
    'Name','new_Conv4', ...
    'WeightLearnRateFactor',10, ...
    'BiasLearnRateFactor',10);
newLearnableLayer5 = convolution2dLayer(3,1,'Padding','same', ...
    'Name','new_Conv5', ...
    'WeightLearnRateFactor',10, ...
    'BiasLearnRateFactor',10);
newfinallayer = regressionLayer('Name','FinalRegressionLayer');
lgraph = replaceLayer(lgraph,'Conv4',newLearnableLayer4);
lgraph = replaceLayer(lgraph,'Conv5',newLearnableLayer5);
lgraph = replaceLayer(lgraph,'FinalRegressionLayer',newfinallayer);
initLearningRate = 1e-4;
learningRateFactor = 0.1;
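These two learning-rate values would typically be passed to trainingOptions. A hedged sketch of options consistent with the settings above (the schedule details, epoch counts, and batch size are assumptions, not copied from the released .mlx):

```matlab
% Hedged sketch of training options consistent with the settings above.
options = trainingOptions('adam', ...
    'InitialLearnRate', initLearningRate, ...        % 1e-4
    'LearnRateSchedule', 'piecewise', ...
    'LearnRateDropFactor', learningRateFactor, ...   % 0.1
    'LearnRateDropPeriod', 10, ...                   % assumed drop period
    'MaxEpochs', 30, ...                             % assumed epoch count
    'MiniBatchSize', 128, ...                        % assumed batch size
    'Shuffle', 'every-epoch');
net = trainNetwork(trainPatches, trainLabels, lgraph, options);
```

Raising WeightLearnRateFactor and BiasLearnRateFactor on only the replaced layers lets the new head adapt quickly while the transferred layers change slowly.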
MIT License

Copyright (c) 2021 liangzifei

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Liang Z, Lee CH, Arefin TM, et al. Virtual mouse brain histology from multi-contrast MRI via deep learning. Elife. 2022;11:e72331. Published 2022 Jan 28. doi:10.7554/eLife.72331