fix: typos and minimal improvements here and there
oesteban committed Apr 11, 2021
1 parent d17b8e5 commit c2f9ec5
Showing 4 changed files with 31 additions and 22 deletions.
30 changes: 19 additions & 11 deletions docs/nipreps/nipreps.md
@@ -9,7 +9,7 @@ Different kinds of artifacts can occur during a scan due to:
- breathing, heart beating, blood vessels
- metal items
- scanner hardware limitations
- distortions due to B0 and B1 inhomogeneities
- distortions due to *B<sub>0</sub>* and *B<sub>1</sub>* inhomogeneities
- eddy currents
- signal drift
- signal processing
@@ -18,12 +18,13 @@ Different kinds of artifacts can occur during a scan due to:
These physiological and acquisition artifacts can confound the interpretation of our analysis results.
Thus, pre-processing is necessary to minimize their influence.

- Pre-processing can also help prepare the data for analysis in other ways. Some examples include:
+ Pre-processing can also help prepare the data for analysis in other ways.
+ Some examples include:

- image registration between acquisitions (e.g., sessions, runs, modalities, etc.)
- - image registration to normalized spaces
+ - image registration to standard spaces
- identifying spurious sources of signal
- - automated segmentation (eg. brain masking, tissue classification)
+ - automated segmentation (e.g., brain masking, tissue classification)

## The problem of methodological variability

@@ -32,7 +33,7 @@ The complexity of these workflows has snowballed with rapid advances in acquisit

In Botvinik-Nezer et al. (2020) [^botvinik2020], 70 independent teams were tasked with analyzing the same fMRI dataset and testing 9 hypotheses.
The study demonstrated the huge amount of variability in analytic approaches as *no two teams* chose identical workflows.
- One encouraging finding was that 48% of teams chose to pre-process the data using fMRIPrep [^esteban2019], a standardized pipeline for fMRI data.
+ One encouraging finding was that 48% of teams chose to pre-process the data using *fMRIPrep* [^esteban2019], a standardized pipeline for fMRI data.

A similar predicament exists in the field of dMRI analysis.
There has been a lot of effort in recent years to compare the influence of various pre-processing steps on tractography and structural connectivity [^oldham2020] [^schilling2019], and to harmonize different datasets [^tax2019].
@@ -50,15 +51,20 @@ All of this points to a need for creating standardized pipelines for pre-process
:align: right
```

- *NiPreps* are a collection of tools that work as an extension of the scanner in that they **minimally pre-process** the data and make them **"safe to consume"** for analysis - kinda like *sashimi*!
+ *NiPreps* are a collection of tools that work as an extension of the scanner in that they produce "*analysis-grade*" data.
+ By *analysis-grade* we mean something like *sushi-grade fish*:
+ *NiPreps* produce ***minimally preprocessed*** data that nonetheless are **"*safe to consume*"** (meaning, ready for modeling and statistical analysis).
From the reversed perspective, *NiPreps* are designed to be ***agnostic to downstream analysis***.
This means that *NiPreps* are carefully designed not to limit the potential analyses that can be performed on the preprocessed data.
For instance, because spatial smoothing is a processing step tightly linked with the assumptions of your statistical model, *fMRIPrep* does not perform any spatial smoothing step.

- Below is a depiction of the projects currently maintained by the NiPreps community.
+ Below is a depiction of the projects currently maintained by the *NiPreps community*.
These tools arose out of the need to extend *fMRIPrep* to new imaging modalities and populations.

They can be organized into three layers:

- Software infrastructure: deliver low-level interfaces and utilities
- - Middleware: contains functions that generalize across the end-user tools
+ - Middleware: contain functions that generalize across the end-user tools
- End-user tools: perform pre-processing or quality control

```{figure} ../images/nipreps-chart.png
@@ -68,10 +74,10 @@ They can be organized into 3 layers:

## NiPreps driving principles

- *NiPreps* are driven by 3 main principles, which are summarized below.
+ *NiPreps* are driven by three main principles, which are summarized below.
These principles distill some design and organizational foundations.

- ### 1. Robust
+ ### 1. Robust with very diverse data

*NiPreps* are meant to be robust to different datasets and attempt to provide the best possible results independent of scanner manufacturer, acquisition parameters, or the presence of additional correction scans (such as field maps).
The end-user tools only impose a single constraint on the input dataset: compliance with BIDS (Brain Imaging Data Structure) [^gorgolewski2016].
@@ -80,7 +86,7 @@ This minimizes human intervention in running the pipelines as they are able to a

The scope of these tools is strictly limited to pre-processing tasks.
This eases the burden of maintaining these tools but also helps focus on standardizing each processing step and reducing the amount of methodological variability.
- *NiPreps* only support BIDS-Derivatives as output and so are agnostic to subsequent analysis.
+ *NiPreps* only support BIDS-Derivatives as output.

*NiPreps* also aim to be robust in their codebase.
The pipelines are modular and rely on widely-used tools such as AFNI, ANTs, FreeSurfer, FSL, Nilearn, or DIPY and are extensible via plug-ins.
@@ -139,6 +145,8 @@ The success of these tools has largely been driven by their strong uptake in the
This has allowed them to be exercised on diverse datasets and has brought the interest of a variety of domain experts to contribute their knowledge towards improving the tools.
The tools are "open source" and all of the code and ideas are visible on GitHub.

### References

[^botvinik2020]: Botvinik-Nezer, R., Holzmeister, F., Camerer, C.F. et al. Variability in the analysis of a single neuroimaging dataset by many teams. Nature 582, 84–88 (2020). doi: 10.1038/s41586-020-2314-9

[^esteban2019]: Esteban, O., Markiewicz, C.J., Blair, R.W. et al. fMRIPrep: a robust preprocessing pipeline for functional MRI. Nat Methods 16, 111–116 (2019). doi: 10.1038/s41592-018-0235-4
13 changes: 7 additions & 6 deletions docs/tutorial/data.md
@@ -23,7 +23,7 @@ warnings.filterwarnings("ignore")

Diffusion imaging probes the random, microscopic movement of water molecules by using MRI sequences that are sensitive to the geometry and environmental organization surrounding these protons.
This is a popular technique for studying the white matter of the brain.
- The diffusion within biological structures, such as the brain, are often restricted due to barriers (eg. cell membranes), resulting in a preferred direction of diffusion (anisotropy).
+ The diffusion within biological structures, such as the brain, is often restricted due to barriers (e.g., cell membranes), resulting in a preferred direction of diffusion (anisotropy).
A typical dMRI scan will acquire multiple volumes (or ***angular samples***), each sensitive to a particular ***diffusion direction***.

<video loop="yes" muted="yes" autoplay="yes" controls="yes"><source src="../videos/dMRI-signal-movie.mp4" type="video/mp4"/></video>
@@ -178,9 +178,10 @@ dmri_dataset.plot_mosaic(index=100, vmax=5000)
Diffusion that is oriented along the direction of the gradient results in a loss of signal.
As we can see, ***diffusion-weighted*** images consistently drop almost all signal in voxels filled with cerebrospinal fluid because there, water diffusion is free (isotropic) regardless of the direction that is being measured.

- We can also see that the images at `index=10` and `index=100` have different gradient strengths.
+ We can also see that the images at `index=10` and `index=100` have different gradient strengths ("*b-values*").
The higher the magnitude of the gradient, the more diffusion that is allowed to occur, indicated by the overall decrease in signal intensity.
- There is also a lot more noise.
+ Stronger gradients yield diffusion maps with substantially lower SNR (signal-to-noise ratio), as well as larger distortions derived from the so-called *eddy currents*.
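The effect of the *b-value* on the measured signal can be sketched with the standard monoexponential decay model, $S = S_0\,e^{-bD}$ (the $S_0$ and $D$ values below are made up for illustration; only the formula itself is standard):

```python
import numpy as np

# Monoexponential diffusion decay: S = S0 * exp(-b * D).
# S0 and D below are illustrative; D ~ 3.0e-3 mm^2/s is roughly
# the diffusivity of free water at body temperature.
S0 = 5000.0  # signal without diffusion weighting (b=0)
D = 3.0e-3   # apparent diffusion coefficient, in mm^2/s

for b in (0, 1000, 2500):  # b-values in s/mm^2
    S = S0 * np.exp(-b * D)
    print(f"b={b:4d} s/mm^2 -> S = {S:6.1f}")
```

At $b=2500$ s/mm² with a free-water diffusivity, less than 0.1% of the original signal survives, which is consistent with CSF-filled voxels dropping almost all signal in the strongly weighted volumes above.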

## Visualizing the gradient information

Our `DWI` object stores the gradient information in the `gradients` attribute.
@@ -198,11 +199,11 @@ dmri_dataset.gradients.shape
```

- We get a $4\times102$.
+ We get a $4\times102$ array: three spatial coordinates ($b_x$, $b_y$, $b_z$) of the unit-norm "*b-vector*", plus the gradient sensitization magnitude (the "*b-value*"), with a total of 102 different orientations for the case at hand.
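The layout of that array can be mocked up with NumPy (all names and numbers below are illustrative; only the $(4, N)$ convention, three b-vector components plus a b-value per column, comes from the text):

```python
import numpy as np

# Mock gradient table: each of the 102 columns holds a unit-norm
# b-vector (rows 0-2) and its b-value (row 3).
rng = np.random.default_rng(1234)
bvecs = rng.standard_normal((3, 102))
bvecs /= np.linalg.norm(bvecs, axis=0)  # normalize every column
bvals = np.full((1, 102), 1000.0)       # a single-shell example
gradients = np.vstack((bvecs, bvals))

print(gradients.shape)  # (4, 102)
print(gradients.T[:3])  # transposed: one gradient per row, easier to read
```

Transposing (`.T`) turns the table into one gradient per row, which is easier to print and inspect.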

```{admonition} Exercise
Try printing the gradient information to see what it contains.
- Remember to transpose (`.T`) the array
+ Remember to transpose (`.T`) the array.
```

**Solution**
@@ -304,7 +305,7 @@ plot_dwi(data_test[0], dmri_dataset.affine, gradient=data_test[1])
```

- `data_train` is a tuple containing all diffusion-weighted volumes and the corresponding gradient table, excluding the left-out, which is stored in `data_test`.
+ `data_train` is a tuple containing all diffusion-weighted volumes and the corresponding gradient table, excluding the left-out one, which is stored in `data_test` (the 11<sup>th</sup> gradient, indexed by `10`, in this example).
`data_test[0]` contains the held-out diffusion-weighted volume and `data_test[1]`, the corresponding gradient table.
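The split itself can be sketched with a boolean mask (an illustrative reimplementation for clarity, not the tutorial's actual splitter):

```python
import numpy as np

def logo_split(data, gradients, index):
    """Leave-one-gradient-out: hold out the volume and gradient at ``index``.

    ``data`` is a 4D array (x, y, z, N); ``gradients`` is a (4, N) table.
    """
    mask = np.ones(data.shape[-1], dtype=bool)
    mask[index] = False
    train = (data[..., mask], gradients[:, mask])
    test = (data[..., ~mask], gradients[:, ~mask])
    return train, test

# Toy data: ten tiny 2x2x2 "volumes" with a matching gradient table
data = np.zeros((2, 2, 2, 10))
gradients = np.zeros((4, 10))
train, test = logo_split(data, gradients, index=3)
print(train[0].shape, test[0].shape)  # (2, 2, 2, 9) (2, 2, 2, 1)
```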

## Next steps: diffusion modeling
2 changes: 1 addition & 1 deletion docs/tutorial/models.md
@@ -14,7 +14,7 @@ kernelspec:

The proposed method requires inferring a motion-free reference DW map for the diffusion orientation whose misalignment we want to estimate.
Inference of the reference map is achieved by first fitting some diffusion model (which we will draw from [DIPY](https://dipy.org)) using all data, except the particular DW map that is to be aligned.
- We call this scheme "leave one gradient out" or "logo".
+ This data splitting scheme was introduced in the previous section, in [](data.md#the-logo-leave-one-gradient-out-splitter).

All models are required to offer the same API (application programming interface):
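A minimal sketch of a model honoring such a shared interface might look as follows (the `fit`/`predict` method names are an assumption here, drawn from common DIPY-style conventions, not necessarily the tutorial's exact signature):

```python
import numpy as np

class TrivialModel:
    """A stand-in model exposing an assumed fit/predict interface.

    ``fit`` memorizes the mean DW volume, and ``predict`` returns it as
    the reference for any left-out gradient. Real models (e.g., a tensor
    fit) would use the gradient table to produce orientation-specific maps.
    """

    def fit(self, data, **kwargs):
        self._mean = data.mean(axis=-1)
        return self

    def predict(self, gradient, **kwargs):
        return self._mean

model = TrivialModel().fit(np.ones((2, 2, 2, 9)))
prediction = model.predict(gradient=np.array([1.0, 0.0, 0.0, 1000.0]))
print(prediction.shape)  # (2, 2, 2)
```

Because every model exposes the same two entry points, the head-motion estimation loop can swap models without changing any surrounding code.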

8 changes: 4 additions & 4 deletions docs/tutorial/registration.md
@@ -13,14 +13,14 @@ This means that, brain sulci and gyri, the ventricles, subcortical structures, e
That allows, for instance, **image fusion**: screening both images together (for example, applying some transparency to the one on top) should not give us the perception that they *are not aligned*.

## ANTs - Advanced Normalization ToolS
- The ANTs toolbox is widely recognized as a powerful image registration (and *normalization*, which is registration to some standard space) framework.
+ The [ANTs toolbox](http://stnava.github.io/ANTs/) is widely recognized as a powerful image registration (and *normalization*, which is registration to some standard space) framework.

The output of an image registration process is the *estimated transform* that brings the information in the two images into alignment.
In our case, the head-motion is a rigid-body displacement of the head.
- Therefore, a very simple (*linear*) model --an affine 4x4 matrix-- can be used to formalize the *estimated transforms*.
+ Therefore, a very simple (*linear*) model (an affine $4\times 4$ matrix) can be used to formalize the *estimated transforms*.
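Such a rigid-body transform can be written out explicitly (the rotation angle and translation below are arbitrary; only the $4\times 4$ homogeneous-coordinates structure matters):

```python
import numpy as np

# Rigid-body head motion: rotations plus translations, packed into a
# 4x4 affine that acts on homogeneous coordinates. Values are arbitrary.
theta = np.radians(5.0)      # a small rotation about the z axis
c, s = np.cos(theta), np.sin(theta)
T = np.eye(4)
T[:3, :3] = [[c, -s, 0.0],   # 3x3 rotation block
             [s,  c, 0.0],
             [0.0, 0.0, 1.0]]
T[:3, 3] = [1.5, -0.5, 2.0]  # translation column, in mm

point = np.array([10.0, 20.0, 30.0, 1.0])  # a location, homogeneous coords
print(T @ point)                           # the same location after "motion"
```

Because the rotation block is orthonormal and no scaling or shearing is involved, distances between points are preserved, which is exactly what we expect from motion of a rigid head.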

- Only very recently, ANTs offers a Python interface to run their tools.
- For this reason, we will use the very much consolidated *Nipype* wrapping of the ANTs' command-line interface.
+ Only recently has ANTs offered a [Python interface](https://doi.org/10.1101/2020.10.19.20215392) to run its tools.
+ For this reason, we will use the well-established [*Nipype* wrapping of the ANTs command-line interface](https://nipype.readthedocs.io/en/latest/api/generated/nipype.interfaces.ants.html#registration).
The code is *almost* as simple as follows:
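A minimal sketch of a rigid registration through *Nipype*'s `Registration` interface could look like the following (the file names and every parameter value are illustrative choices, not the tutorial's; consult the interface documentation for the full set of options):

```python
from nipype.interfaces.ants import Registration

# One "Rigid" stage with two resolution levels; all values illustrative.
registration = Registration(
    fixed_image="reference.nii.gz",    # hypothetical file names
    moving_image="dwi_volume.nii.gz",
    transforms=["Rigid"],              # head motion is rigid-body
    transform_parameters=[(0.1,)],     # gradient step of the optimizer
    metric=["Mattes"],                 # mutual-information cost
    metric_weight=[1.0],
    radius_or_number_of_bins=[32],
    number_of_iterations=[[100, 50]],  # iterations per resolution level
    shrink_factors=[[2, 1]],
    smoothing_sigmas=[[1.0, 0.0]],
    sigma_units=["vox"],
    output_transform_prefix="motion_",
)
# registration.run() would execute the underlying antsRegistration tool;
# printing the command line shows what Nipype assembles from the inputs.
print(registration.cmdline)
```

Running the interface requires ANTs to be installed and on the `PATH`; Nipype only assembles and supervises the command line.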

