diff --git a/docs/_toc.yml b/docs/_toc.yml index 943d294..02f93cc 100644 --- a/docs/_toc.yml +++ b/docs/_toc.yml @@ -9,8 +9,12 @@ - file: nipreps/community_development - part: tutorial chapters: + - file: head-motion/step0 - file: head-motion/intro - file: head-motion/data - file: head-motion/models - file: head-motion/registration - file: head-motion/solution +- part: extra + chapters: + - file: extra/nifti diff --git a/docs/assets/videos/dMRI-signal-movie.mp4 b/docs/assets/videos/dMRI-signal-movie.mp4 new file mode 100644 index 0000000..4da5df3 Binary files /dev/null and b/docs/assets/videos/dMRI-signal-movie.mp4 differ diff --git a/docs/assets/videos/hm-sagittal.avi b/docs/assets/videos/hm-sagittal.avi deleted file mode 100644 index 5222989..0000000 Binary files a/docs/assets/videos/hm-sagittal.avi and /dev/null differ diff --git a/docs/assets/videos/hm-sagittal.mp4 b/docs/assets/videos/hm-sagittal.mp4 new file mode 100644 index 0000000..1c2361f Binary files /dev/null and b/docs/assets/videos/hm-sagittal.mp4 differ diff --git a/docs/extra/nifti.md b/docs/extra/nifti.md new file mode 100644 index 0000000..3168571 --- /dev/null +++ b/docs/extra/nifti.md @@ -0,0 +1,282 @@ +--- +jupytext: + formats: md:myst + text_representation: + extension: .md + format_name: myst +kernelspec: + display_name: Python 3 + language: python + name: python3 +--- + +# The extra mile + +## Investigating NIfTI images with *NiBabel* + +[NiBabel](https://nipy.org/nibabel/) is a Python package for reading and writing neuroimaging data. +To learn more about how NiBabel handles NIfTIs, check out the [Working with NIfTI images](https://nipy.org/nibabel/nifti_images.html) page of the NiBabel documentation. + +```{code-cell} python +import nibabel as nib +``` + +First, use the `load()` function to create a NiBabel image object from a NIfTI file. +We'll load in an example dMRI file from the `data` folder. 
+
+```{code-cell} python
+dwi = "../../data/sub-01_dwi.nii.gz"
+
+dwi_img = nib.load(dwi)
+```
+
+Loading in a NIfTI file with `NiBabel` gives us a special type of data object which encodes all the information in the file.
+Each bit of information is called an **attribute** in Python's terminology.
+To see all of these attributes, type `dwi_img.` followed by Tab.
+There are three main attributes that we'll discuss today:
+
+### 1. [Header](https://nipy.org/nibabel/nibabel_images.html#the-image-header): contains metadata about the image, such as image dimensions, data type, etc.
+
+```{code-cell} python
+dwi_hdr = dwi_img.header
+print(dwi_hdr)
+```
+
+### 2. Data
+
+As you've seen above, the header contains useful metadata about the properties of the dMRI data we've loaded in.
+Now we'll move on to loading the actual *image data itself*.
+We can achieve this by using the `get_fdata()` method.
+
+```{code-cell} python
+:tags: [output_scroll]
+
+dwi_data = dwi_img.get_fdata()
+dwi_data
+```
+
+What type of data is this exactly? We can determine this by calling the `type()` function on `dwi_data`.
+
+```{code-cell} python
+type(dwi_data)
+```
+
+The data is a multidimensional **array** representing the image data.
+
+How many dimensions are in the `dwi_data` array?
+
+```{code-cell} python
+dwi_data.ndim
+```
+
+As expected, the data contains 4 dimensions (*i, j, k* and gradient number).
+
+How big is each dimension?
+
+```{code-cell} python
+dwi_data.shape
+```
+
+This tells us that the image is 128 voxels along *i*, 128 along *j* and 66 along *k*, with the fourth dimension counting the diffusion-weighted volumes.
+
+Let's plot the first 10 volumes.
+
+```{code-cell} python
+:tags: [output_scroll]
+
+%matplotlib inline
+
+from nilearn import image
+from nilearn.plotting import plot_epi
+
+selected_volumes = image.index_img(dwi, slice(0, 10))
+
+for img in image.iter_img(selected_volumes):
+    plot_epi(img, display_mode="z", cut_coords=(30, 53, 75), cmap="gray")
+```
+
+### 3. [Affine](https://nipy.org/nibabel/coordinate_systems.html): tells the position of the image array data in a reference space
+
+The final important piece of metadata associated with an image file is the **affine matrix**.
+Below is the affine matrix for our data.
+
+```{code-cell} python
+dwi_affine = dwi_img.affine
+dwi_affine
+```
+
+To explain this concept, recall that we referred to coordinates in our data as *(i,j,k)* coordinates such that:
+
+* i is the first dimension of `dwi_data`
+* j is the second dimension of `dwi_data`
+* k is the third dimension of `dwi_data`
+
+Although this tells us how to access our data in terms of voxels in a 3D volume, it doesn't tell us much about the actual dimensions in our data (centimetres, right or left, up or down, back or front).
+The affine matrix allows us to translate between *voxel coordinates* in (i,j,k) and *world space coordinates* in (left/right,bottom/top,back/front).
+An important caveat is that the order of the world axes:
+
+* left/right
+* bottom/top
+* back/front
+
+depends on how the affine matrix was constructed. For the data we're dealing with, the axes always refer to:
+
+* Right
+* Anterior
+* Superior
+
+Applying the affine matrix amounts to a *linear map* (matrix multiplication) on the voxel coordinates (defined in `dwi_data`).
+
+## Diffusion gradient schemes
+
+In addition to the acquired diffusion images, two files are collected as part of the diffusion dataset.
+These files correspond to the gradient amplitude (b-values) and directions (b-vectors) of the diffusion measurement and are named with the extensions `.bval` and `.bvec` respectively.
+
+```{code-cell} python
+bvec = "../../data/sub-01_dwi.bvec"
+bval = "../../data/sub-01_dwi.bval"
+```
+
+The b-value is the diffusion-sensitizing factor, and reflects both the timing & strength of the gradients (measured in s/mm^2) used to acquire the diffusion-weighted images.
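For the classical pulsed-gradient spin-echo (Stejskal-Tanner) experiment, the b-value relates to the gradient amplitude and timings as b = (gamma * G * delta)^2 * (Delta - delta / 3). A small sketch of this relationship with illustrative parameter values (not those of this acquisition):

```python
import math

# Stejskal-Tanner b-value: b = (gamma * G * delta)**2 * (Delta - delta / 3)
# All parameter values below are illustrative, not read from this dataset.
GAMMA = 267.513e6   # gyromagnetic ratio of 1H, in rad / (s * T)
G = 0.04            # gradient amplitude, in T/m (i.e., 40 mT/m)
delta = 0.020       # duration of each gradient pulse, in s
Delta = 0.040       # time between the onsets of the two pulses, in s

b_si = (GAMMA * G * delta) ** 2 * (Delta - delta / 3)  # in s / m^2
b = b_si * 1e-6                                        # convert to s / mm^2
print(round(b))  # -> 1527, i.e., a typical diffusion weighting
```

Doubling the gradient amplitude would quadruple the b-value, which is why stronger gradient hardware permits higher diffusion weighting at shorter echo times.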
+
+```{code-cell} python
+!cat ../../data/sub-01_dwi.bval
+```
+
+The b-vector corresponds to the direction of the diffusion sensitivity. Each row corresponds to a value in the i, j, or k axis. The numbers are combined column-wise to get an [i j k] coordinate per DWI volume.
+
+```{code-cell} python
+!cat ../../data/sub-01_dwi.bvec
+```
+
+Together these two files define the dMRI measurement as a set of gradient directions and corresponding amplitudes.
+
+In our example data, we see that 2 b-values were chosen for this scanning sequence.
+The first few images were acquired with a b-value of 0 and are typically referred to as b=0 images.
+In these images, no diffusion gradient is applied.
+These images don't hold any diffusion information and are used as a reference (for head motion correction or brain masking) since they aren't subject to the same types of scanner artifacts that affect diffusion-weighted images.
+
+All of the remaining images have a b-value of 1000 and have a diffusion gradient associated with them.
+Diffusion that exhibits directionality in the same direction as the gradient results in a loss of signal.
+With further processing, the acquired images can provide measurements related to microstructural changes and can be used to estimate white matter trajectories.
+
+We'll use some functions from [Dipy](https://dipy.org), a Python package for pre-processing and analyzing diffusion data.
+After reading the `.bval` and `.bvec` files with the `read_bvals_bvecs()` function, we get both in a numpy array. Notice that the `.bvec` file has been transposed so that the i, j, and k-components are in column format.
+
+```{code-cell} python
+from dipy.io import read_bvals_bvecs
+
+gt_bvals, gt_bvecs = read_bvals_bvecs(bval, bvec)
+gt_bvecs
+```
+
+Below is a plot of all of the diffusion directions that we've acquired.
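Each b-vector encodes only a direction, so every row should be a unit vector for diffusion-weighted volumes and an all-zero row for b=0 volumes. A quick sanity check for this property, sketched here on made-up gradients rather than the tutorial data:

```python
import numpy as np

# Made-up gradient table: one b=0 entry plus three unit directions
bvecs = np.array([
    [0.0, 0.0, 0.0],   # b=0 volume: no direction
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.6, 0.8, 0.0],   # 0.6**2 + 0.8**2 == 1, so unit-length
])

norms = np.linalg.norm(bvecs, axis=1)
# Each row must be either a zero vector (b=0) or a unit vector
ok = np.all((np.abs(norms - 1) < 1e-4) | (norms < 1e-4))
print(bool(ok))  # -> True
```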
+
+```{code-cell} python
+import matplotlib.pyplot as plt
+
+fig = plt.figure()
+ax = fig.add_subplot(111, projection="3d")
+ax.scatter(gt_bvecs.T[0], gt_bvecs.T[1], gt_bvecs.T[2])
+plt.show()
+```
+
+It is important to note that in this format, the diffusion gradients are provided with respect to the image axes, not in real or scanner coordinates. Simply reformatting the image from sagittal to axial will effectively rotate the b-vectors, since this operation changes the image axes. Thus, a particular bvals/bvecs pair is only valid for the particular image that it corresponds to.
+
+Because the diffusion gradient table is critical for later analysis of the data, we re-express the b-vectors in world coordinates by applying the image affine.
+
+```{code-cell} python
+:tags: [output_scroll]
+
+import numpy as np
+
+if dwi_affine.shape == (4, 4):
+    dwi_affine = dwi_affine[:3, :3]
+
+rotated_bvecs = dwi_affine[np.newaxis, ...].dot(gt_bvecs.T)[0].T
+rotated_bvecs
+```
+
+```{code-cell} python
+
+fig = plt.figure()
+ax = fig.add_subplot(111, projection="3d")
+ax.scatter(rotated_bvecs.T[0], rotated_bvecs.T[1], rotated_bvecs.T[2])
+plt.show()
+```
+
+Inspired by MRtrix3 and proposed in the [BIDS spec](https://github.com/bids-standard/bids-specification/issues/349), dMRIPrep also creates an optional `.tsv` file where the diffusion gradients are reported in scanner coordinates as opposed to image coordinates.
+The [i j k] values reported earlier are recalculated in [R A S].
+
+```{code-cell} python
+:tags: [output_scroll]
+
+rasb = np.c_[rotated_bvecs, gt_bvals]
+
+rasb
+```
+
+We can write out this `.tsv` to a file.
+
+```{code-cell} python
+np.savetxt(fname="../../data/sub-01_rasb.tsv", delimiter="\t", X=rasb)
+```
+
+```{code-cell} python
+from dipy.core.gradients import gradient_table
+
+gtab = gradient_table(gt_bvals, rotated_bvecs)
+```
+
+## Brain Masking
+
+One of the first things we do before image registration is brain extraction, separating any non-brain material from brain tissue.
+This is done so that our algorithms aren't biased or distracted by non-brain material, and so that we don't spend extra time analyzing things we don't care about.
+
+As mentioned before, the b=0 volumes are a good reference scan for doing brain masking. Let's index them.
+
+```{code-cell} python
+gtab.b0s_mask
+```
+
+```{code-cell} python
+bzero = dwi_data[:, :, :, gtab.b0s_mask]
+```
+
+We will skull-strip using these b=0 volumes; there are 5 of them in this dataset.
+
+```{code-cell} python
+bzero.shape
+```
+
+Next, we take the median image across the b=0 volumes.
+
+```{code-cell} python
+
+median_bzero = np.median(bzero, axis=-1)
+```
+
+```{code-cell} python
+from dipy.segment.mask import median_otsu
+
+b0_mask, mask = median_otsu(median_bzero, median_radius=2, numpass=1)
+```
+
+```{code-cell} python
+from dipy.core.histeq import histeq
+
+sli = median_bzero.shape[2] // 2
+plt.subplot(1, 3, 1).set_axis_off()
+plt.imshow(histeq(median_bzero[:, :, sli].astype("float")).T,
+           cmap="gray", origin="lower")
+
+plt.subplot(1, 3, 2).set_axis_off()
+plt.imshow(mask[:, :, sli].astype("float").T, cmap="gray", origin="lower")
+
+plt.subplot(1, 3, 3).set_axis_off()
+plt.imshow(histeq(b0_mask[:, :, sli].astype("float")).T,
+           cmap="gray", origin="lower")
+```
diff --git a/docs/head-motion/data.md b/docs/head-motion/data.md
index fb2ee1d..9254a50 100644
--- a/docs/head-motion/data.md
+++ b/docs/head-motion/data.md
@@ -10,211 +10,85 @@ kernelspec:
     name: python3
 ---
 
-# Introduction to dMRI Data
+# Introduction to dMRI data
 
 ```{code-cell} python
 :tags: [hide-cell]
 
 import warnings
 
 warnings.filterwarnings("ignore")
-```
-
-Diffusion imaging probes the random, microscopic motion of water protons by employing MRI sequences which are sensitive to the geometry and environmental organization surrounding the water protons.
-This is a popular technique for studying the white matter of the brain.
-The diffusion within biological structures, such as the brain, are often restricted due to barriers (eg. cell membranes), resulting in a preferred direction of diffusion (anisotropy).
-A typical dMRI scan will acquire multiple volumes that are sensitive to a particular diffusion direction. - -## Diffusion Gradient Schemes - -In addition to the acquired diffusion images, two files are collected as part of the diffusion dataset. -These files correspond to the gradient amplitude (b-values) and directions (b-vectors) of the diffusion measurement and are named with the extensions `.bval` and `.bvec` respectively. - -```{code-cell} python -dwi = "../../data/sub-01_dwi.nii.gz" -bvec = "../../data/sub-01_dwi.bvec" -bval = "../../data/sub-01_dwi.bval" -``` -The b-value is the diffusion-sensitizing factor, and reflects the timing & strength of the gradients (measured in s/mm2) used to acquire the diffusion-weighted images. -```{code-cell} python -!cat ../../data/sub-01_dwi.bval -``` - -The b-vector corresponds to the direction of the diffusion sensitivity. Each row corresponds to a value in the x, y, or z axis. The numbers are combined column-wise to get an [x y z] coordinate per DWI volume. - -```{code-cell} python -!cat ../../data/sub-01_dwi.bvec -``` - -Together these two files define the dMRI measurement as a set of gradient directions and corresponding amplitudes. - -In the example data above, we see that 2 b-values were chosen for this scanning sequence. -The first few images were acquired with a b-value of 0 and are typically referred to as b=0 images. -In these images, no diffusion gradient is applied. -These images don't hold any diffusion information and are used as a reference (head motion correction) since they aren't subject to the same types of scanner artifacts that affect diffusion-weighted images. - -All of the remaining images have a b-value of 1000 and have a diffusion gradient associated with them. -Diffusion that exhibits directionality in the same direction as the gradient result in a loss of signal. 
-With further processing, the acquired images can provide measurements which are related to the microscopic changes and estimate white matter trajectories. - -```{code-cell} python -%matplotlib inline - -from nilearn import image -from nilearn.plotting import plot_epi - -selected_volumes = image.index_img(dwi, slice(3, 7)) - -for img in image.iter_img(selected_volumes): - plot_epi(img, display_mode="z", cut_coords=(30, 53, 75), cmap="gray") -``` - -After reading the `.bval` and `.bvec` files with the `read_bvals_bvecs()` function, we get both in a numpy array. Notice that the `.bvec` file has been transposed so that the x, y, and z-components are in column format. - -```{code-cell} python -from dipy.io import read_bvals_bvecs +def _data_repr(value): + if value is None: + return "None" + return f"<{'x'.join(str(v) for v in value.shape)} ({value.dtype})>" -gt_bvals, gt_bvecs = read_bvals_bvecs(bval, bvec) -gt_bvecs ``` -```{code-cell} python -import matplotlib.pyplot as plt +Diffusion imaging probes the random, microscopic motion of water protons by using MRI sequences that are sensitive to the geometry and environmental organization surrounding these protons. +This is a popular technique for studying the white matter of the brain. +The diffusion within biological structures, such as the brain, are often restricted due to barriers (eg. cell membranes), resulting in a preferred direction of diffusion (anisotropy). +A typical dMRI scan will acquire multiple volumes (or ***angular samples***), each sensitive to a particular ***diffusion direction***. +These *diffusion directions* (or ***orientations***) are a fundamental piece of metadata to interpret dMRI data, as models need to know the exact orientation of each angular sample. 
-fig = plt.figure() -ax = fig.add_subplot(111, projection="3d") -ax.scatter(gt_bvecs.T[0], gt_bvecs.T[1], gt_bvecs.T[2]) -plt.show() +```{admonition} Main elements of a dMRI dataset +- A 4D data array, where the last dimension encodes the reconstructed **diffusion direction *maps***. +- Tabular data or a 2D array, listing the **diffusion directions** and the encoding gradient strength. ``` -It is important to note that in this format, the diffusion gradients are provided with respect to the image axes, not in real or scanner coordinates. Simply reformatting the image from sagittal to axial will effectively rotate the b-vectors, since this operation changes the image axes. Thus, a particular bvals/bvecs pair is only valid for the particular image that it corresponds to. - -## Diffusion Gradient Operations - -Because the diffusion gradient is critical for later analyzing the data, dMRIPrep performs several checks to ensure that the information is stored correctly. -### BIDS Validator - -At the beginning of the pipeline, the BIDS Validator is run. -This package ensures that the data is BIDS-compliant and also has several dMRI-specific checks summarized below: - -- all dMRI scans have a corresponding `.bvec` and `.bval` file -- the files aren't empty and formatted correctly - - single space delimited - - only contain numeric values - - correct number of rows and volume information - - volume information matches between image, `.bvec` and `.bval` files +In summary, dMRI involves ***complex data types*** that, as programmers, we want to access, query and manipulate with ease. -### DiffusionGradientTable +## Python and object oriented programming -In dMRIPrep, the `DiffusionGradientTable` class is used to read in the `.bvec` and `.bval` files, perform further sanity checks and make any corrections if needed. 
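A minimal sketch of the kind of bval/bvec sanity checks described above (a simplified, assumed implementation; `check_gradient_table` is a hypothetical helper for illustration, not part of dMRIPrep):

```python
import numpy as np

def check_gradient_table(bvals, bvecs, n_volumes):
    """Basic consistency checks relating b-values, b-vectors and volume count."""
    bvals = np.asarray(bvals, dtype=float)
    bvecs = np.asarray(bvecs, dtype=float)
    assert bvals.ndim == 1, "bvals must be a single sequence of numbers"
    assert bvecs.shape == (len(bvals), 3), "one 3D direction per b-value"
    assert len(bvals) == n_volumes, "one entry per DWI volume in the image"
    assert np.isfinite(bvals).all() and np.isfinite(bvecs).all(), "numeric only"
    return True

# Toy inputs: one b=0 entry plus two diffusion-weighted directions
bvals = [0, 1000, 1000]
bvecs = [[0, 0, 0], [1, 0, 0], [0, 1, 0]]
print(check_gradient_table(bvals, bvecs, n_volumes=3))  # -> True
```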
- -```{code-cell} python -from dmriprep.utils.vectors import DiffusionGradientTable - -dwi = "../../data/sub-02_dwi.nii.gz" -bvec = "../../data/sub-02_dwi.bvec" -bval = "../../data/sub-02_dwi.bval" - -gt_bvals, gt_bvecs = read_bvals_bvecs(bval, bvec) - -gtab = DiffusionGradientTable(dwi_file=dwi, bvecs=bvec, bvals=bval) -``` +Python is an [object oriented programming](https://en.wikipedia.org/wiki/Object-oriented_programming) language, which represent and encapsulate data types and corresponding behaviors into programming structures called *objects*. -Below is a comparison of the `.bvec` and `.bval` files as read originally using `dipy` and after being corrected using `DiffusionGradientTable`. +Therefore, let's leverage Python to create *objects* that contain dMRI data. +In Python, *objects* can be specified by defining a class with name `DWI`. +To simplify class creation, we'll use the magic of a Python library called [`attrs`](https://www.attrs.org/en/stable/). ```{code-cell} python -gt_bvals -``` +"""Representing data in hard-disk and memory.""" +import attr -It looks like this data has 5 unique b-values: 0, 600, 900, 1200 and 1800. -However, the actual values that are reported look slightly different. -```{code-cell} python -from collections import Counter -Counter(sorted(gt_bvals)) -``` +@attr.s(slots=True) +class DWI: + """Data representation structure for dMRI data.""" -dMRIPrep does a bit of rounding internally to cluster the b-values into shells. 
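That rounding step can be sketched as snapping each reported b-value to the nearest multiple of 100 (an assumed heuristic for illustration only; dMRIPrep's actual logic lives in `dmriprep.utils.vectors`):

```python
import numpy as np

# Noisy b-values as a scanner might report them (made-up numbers)
raw_bvals = np.array([5, 598, 903, 1195, 1790, 0, 601, 897])

# Round to the nearest multiple of 100 to recover the nominal shells
shells = sorted(set((np.round(raw_bvals / 100) * 100).astype(int).tolist()))
print(shells)  # -> [0, 600, 900, 1200, 1800]
```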
+ dataobj = attr.ib(default=None, repr=_data_repr) + """A numpy ndarray object for the data array, without *b=0* volumes.""" + brainmask = attr.ib(default=None, repr=_data_repr) + """A boolean ndarray object containing a corresponding brainmask.""" + bzero = attr.ib(default=None, repr=_data_repr) + """A *b=0* reference map, preferably obtained by some smart averaging.""" + gradients = attr.ib(default=None, repr=_data_repr) + """A 2D numpy array of the gradient table in RAS+B format.""" + em_affines = attr.ib(default=None) + """ + List of :obj:`nitransforms.linear.Affine` objects that bring + DWIs (i.e., no b=0) into alignment. + """ -```{code-cell} python -gtab.bvals -``` + def __len__(self): + """Obtain the number of high-*b* orientations.""" + return self.gradients.shape[-1] -```{code-cell} python -gt_bvecs[0:20] ``` -It also replaces the b-vecs where a b-value of 0 is expected. +This first code implements several *attributes* and the first *behavior* - the `__len__` *method*. +The `__len__` method is special in Python, as it will be executed when we call the built-in function `len()` on our object. +Let's test this memory structure with some *simulated* data: ```{code-cell} python -gtab.bvecs[0:20] -``` +# NumPy is a fundamental Python library +import numpy as np -Inspired by MRtrix3 and proposed in the [BIDS spec](https://github.com/bids-standard/bids-specification/issues/349), dMRIPrep also creates an optional `.tsv` file where the diffusion gradients are reported in scanner coordinates as opposed to image coordinates. -The [x y z] values reported earlier are recalculated in [R A S]. +# Let's create a new DWI object, with only gradient information that is random +dmri_dataset = DWI(gradients=np.random.normal(size=(4, 109))) -```{code-cell} python -gtab.gradients[0:20] +# Let's call Python's built-in len() function +print(len(dmri_dataset)) ``` -Why is this important? - -Below is an example of how improperly encoded bvecs can affect tractography. 
-![incorrect_bvecs](../images/incorrect_bvecs.png) - -`MRtrix3` has actually created a handy tool called `dwigradcheck` to confirm whether the diffusion gradient table is oriented correctly. - -``` -$ dwigradcheck -fslgrad ../../data/sub-02_dwi.bvec ../../data/sub-02_dwi.bval ../../data/sub-02_dwi.nii.gz - -> Mean length Axis flipped Axis permutations Axis basis -52.41 none (0, 1, 2) image -51.68 none (0, 1, 2) scanner -32.70 1 (0, 1, 2) image -32.25 1 (0, 1, 2) scanner -31.23 0 (0, 2, 1) scanner -30.97 2 (0, 1, 2) scanner -30.82 0 (0, 2, 1) image -29.41 2 (0, 1, 2) image -29.31 none (0, 2, 1) image -28.61 none (1, 0, 2) image -28.57 2 (1, 0, 2) scanner -28.46 none (0, 2, 1) scanner -28.41 none (2, 1, 0) scanner -28.40 none (1, 0, 2) scanner -28.14 0 (0, 1, 2) scanner -28.04 none (2, 1, 0) image -27.92 1 (2, 1, 0) image -27.80 1 (2, 1, 0) scanner -27.71 2 (1, 0, 2) image -27.54 0 (0, 1, 2) image -23.43 1 (0, 2, 1) image -22.86 1 (0, 2, 1) scanner -21.55 2 (0, 2, 1) scanner -21.44 0 (1, 2, 0) scanner -21.35 2 (0, 2, 1) image -21.03 1 (1, 0, 2) image -20.88 0 (1, 0, 2) image -20.87 1 (1, 2, 0) image -20.80 0 (2, 0, 1) scanner -20.74 0 (1, 0, 2) scanner -20.41 2 (2, 0, 1) scanner -20.38 1 (1, 0, 2) scanner -20.25 0 (2, 1, 0) image -20.24 0 (1, 2, 0) image -20.21 1 (1, 2, 0) scanner -20.15 1 (2, 0, 1) image -20.13 2 (1, 2, 0) scanner -20.11 2 (2, 0, 1) image -20.04 1 (2, 0, 1) scanner -19.94 0 (2, 0, 1) image -19.87 none (2, 0, 1) scanner -19.86 none (2, 0, 1) image -19.83 2 (2, 1, 0) scanner -19.72 2 (1, 2, 0) image -19.59 none (1, 2, 0) image -19.49 0 (2, 1, 0) scanner -19.45 2 (2, 1, 0) image -19.43 none (1, 2, 0) scanner -``` +For simplicity, we will be using the full implementation from our [`emc` (EddyMotionCorrection) package](https://github.com/nipreps/EddyMotionCorrection/blob/57c518929146b23cc9534ab0b2d024aa136e25f8/emc/dmri.py) diff --git a/docs/head-motion/intro.md b/docs/head-motion/intro.md index 5b8d44f..cbae09c 100644 --- a/docs/head-motion/intro.md +++ 
b/docs/head-motion/intro.md
@@ -3,10 +3,10 @@
 A recurring problem for any MRI acquisition is that image reconstruction and modeling are extremely sensitive to very small changes in the position of the imaged object.
 Rigid-body, bulk-motion of the head will degrade every image, even if the experimenters closely followed all the standard operation procedures and carefully prepared the experiment (e.g., setting correctly the head paddings), and even if the participant was experienced with the MR settings and strictly followed indications to avoid any movement outside time windows allocated for rest.
 This effect is exacerbated by the length of the acquisition (longer acquisitions will have more motion), and is not limited to humans.
-For instance, although rats are typically accquired with head fixations and under sedation, their breathing (especially when assisted) generally causes motion.
+For instance, although rats are typically acquired with head fixations and under sedation, their breathing (especially when assisted) generally causes motion.
 Even the vibration of the scanner itself can introduce motion!
-
+
 
 ## Dimensions of the head-motion problem
 
@@ -20,4 +20,4 @@ While we can address the misalignment, it is really problematic to overcome the
 
 ## Objective: Implement a head-motion estimation code
 
 This tutorial focuses on the misalignment problem.
-We will build from existing software (DIPY, for diffusion modeling) and ANTs (for image registration), as well as commonplace Python libraries (NumPy) a software framework for head-motion estimation in diffusion MRI data.
\ No newline at end of file
+Starting from existing software (Dipy for diffusion modeling, ANTs for image registration) and commonplace Python libraries (NumPy), we will build a software framework for head-motion estimation in diffusion MRI data.
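The rigid-body model at the heart of such a framework has six parameters: three rotations and three translations. A minimal NumPy sketch of assembling them into a 4x4 transform (the parameter values below are arbitrary):

```python
import numpy as np

def rigid_matrix(rx, ry, rz, tx, ty, tz):
    """Build a 4x4 rigid-body transform from rotations (rad) and translations (mm)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    # Elemental rotations about the x, y and z axes
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx   # composed rotation
    T[:3, 3] = (tx, ty, tz)    # translation
    return T

T = rigid_matrix(0.01, -0.02, 0.005, 1.2, -0.4, 0.9)
R = T[:3, :3]
# A rigid rotation is orthonormal with determinant +1
print(np.allclose(R.T @ R, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))
```

Estimating these six parameters per diffusion-weighted volume, and applying the resulting transforms, is exactly what the registration machinery in the following chapters will do.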
diff --git a/docs/step0.md b/docs/head-motion/step0.md
similarity index 95%
rename from docs/step0.md
rename to docs/head-motion/step0.md
index 09afbb7..7f00783 100644
--- a/docs/step0.md
+++ b/docs/head-motion/step0.md
@@ -29,4 +29,4 @@
 jupyter lab
 ```
 
-## Local installation ("docker containers")
+## Local installation ("docker containers")
diff --git a/docs/images/incorrect_bvecs.png b/docs/images/incorrect_bvecs.png
deleted file mode 100644
index fde9de1..0000000
Binary files a/docs/images/incorrect_bvecs.png and /dev/null differ
diff --git a/docs/images/nipreps-chart.png b/docs/images/nipreps-chart.png
new file mode 100644
index 0000000..8f2fd0c
Binary files /dev/null and b/docs/images/nipreps-chart.png differ
diff --git a/docs/images/nipreps-chart.svg b/docs/images/nipreps-chart.svg
deleted file mode 100644
index 77b9e6f..0000000
--- a/docs/images/nipreps-chart.svg
+++ /dev/null
@@ -1,6123 +0,0 @@
[6,123 deleted lines of SVG markup: the NiPreps components chart, with labels and blurbs for the end-user applications (sMRIPrep, fMRIPrep, dMRIPrep, MRIQC), middleware utilities (NiWorkflows, SDCFlows, CrowdMRI, MRIQCnets, TemplateFlow, NiTransforms), and software infrastructure (NiPype, BIDS + PyBIDS, NiBabel)]
diff --git a/docs/nipreps/nipreps.md b/docs/nipreps/nipreps.md
index a38c148..b8f0204 100644
--- a/docs/nipreps/nipreps.md
+++ b/docs/nipreps/nipreps.md
@@ -61,7 +61,7 @@ They can be organized into 3 layers:
 - Middleware: contains functions that generalize across the end-user tools
 - End-user tools: perform pre-processing or quality control
 
-```{figure} ../images/nipreps-chart.svg
+```{figure} ../images/nipreps-chart.png
 :name: nipreps_chart
 ```
@@ -83,7 +83,7 @@ This eases the burden of maintaining these tools but also helps focus on standar
 *NiPreps* only support BIDS-Derivatives as output and so are agnostic to subsequent analysis.
 
 *NiPreps* also aim to be robust in their codebase.
-The pipelines are modular and rely on widely-used tools such as AFNI, ANTs, FreeSurfer, FSL, NiLearn, or DIPY and are extensible via plug-ins.
+The pipelines are modular and rely on widely-used tools such as AFNI, ANTs, FreeSurfer, FSL, Nilearn, or Dipy and are extensible via plug-ins.
 This modularity in the code allows each step to be thoroughly tested.
 Some examples of tests performed on different parts of the pipeline are shown below:
 
 ```{tabbed} unittest
diff --git a/docs/welcome.md b/docs/welcome.md
index 0713fdf..6983b2f 100644
--- a/docs/welcome.md
+++ b/docs/welcome.md
@@ -3,15 +3,20 @@
 ## *Implementing a head-motion correction algorithm for diffusion MRI in Python, using Dipy and NiTransforms*
 
 **Summary**.
-This tutorial walks attendees through the development of one fundamental step in the processing of diffusion MRI data using a community-driven approach and relying on existing tools.
-The tutorial first justifies the *NiPreps* approach to preprocessing, describing how the framework attempts to enhance or extend the scanning device to produce "analysis-grade" data.
-This is important because data produced by the scanner is typically not digestible by statistical analysis directly.
-Researchers resort to either 1) modifying their experimental design so that it matches the requirements of large-scale studies that have made publicly available all their software tooling or 2) creating custom preprocessing pipelines tailored to each particular study.
-This tutorial has been designed to engage signal processing engineers and imaging researchers in the NiPreps community, demonstrating how to fill the gaps of their preprocessing needs regardless of their field.
+This tutorial walks attendees through the development of one fundamental step in the pre-processing of diffusion MRI data using a community-driven approach and relying on existing tools.
+The tutorial first justifies the *NiPreps* approach to pre-processing, describing how the framework attempts to enhance or extend the MRI scanner to produce "analysis-grade" data.
+This is important because data produced by the scanner is typically not digestible for statistical analysis directly.
+
+Researchers resort to either:
+
+1. modifying their experimental design so that it matches the requirements of large-scale studies that have made all of their software tools publicly available
+1. creating custom pre-processing pipelines tailored to each particular study
+
+This tutorial has been designed to engage signal processing engineers and imaging researchers in the *NiPreps* community, demonstrating how to fill in the gaps of their pre-processing needs regardless of their field.
 
 ```{admonition} Objectives
 - Learn how to contribute to "open source" software
 - Get a tour of the *NiPreps* framework
 - Understand the basics of dMRI data and pre-processing
 - Discover how to integrate some of the tools in the *NiPreps* framework
-```
\ No newline at end of file
+```
diff --git a/requirements.txt b/requirements.txt
index da014c6..c2de8f6 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,7 +1,13 @@
-numpy
+attr
 dipy
-nitransforms
+git+https://github.com/nipreps/EddyMotionCorrection.git@main
 ghp-import
 jupyter-book
 jupytext
+matplotlib
+nibabel
+nilearn
+nitransforms
+niworkflows
+numpy
 sphinx-exercise