Merged
1 change: 0 additions & 1 deletion data/sub-02_dwi.bval

This file was deleted.

3 changes: 0 additions & 3 deletions data/sub-02_dwi.bvec

This file was deleted.

Binary file removed data/sub-02_dwi.nii.gz
Binary file not shown.
2 changes: 0 additions & 2 deletions docs/_config.yml
@@ -7,8 +7,6 @@ copyright: "2021"
sphinx:
config:
html_extra_path: ["assets"]
extra_extensions:
- sphinx_exercise

# Execution settings
execute:
165 changes: 6 additions & 159 deletions docs/extra/nifti.md
@@ -15,7 +15,7 @@ kernelspec:
## Investigating NIfTI images with *NiBabel*

[NiBabel](https://nipy.org/nibabel/) is a Python package for reading and writing neuroimaging data.
To learn more about how NiBabel handles NIfTIs, check out the [Working with NIfTI images](https://nipy.org/nibabel/nifti_images.html) page of the NiBabel documentation.
To learn more about how NiBabel handles NIfTIs, check out the [Working with NIfTI images](https://nipy.org/nibabel/nifti_images.html) page of the NiBabel documentation, from which this lesson is adapted.

```{code-cell} python
import nibabel as nib
@@ -105,178 +105,25 @@ dwi_affine = dwi_img.affine
dwi_affine
```

To explain this concept, recall that we referred to coordinates in our data as *(i,j,k)* coordinates such that:
To explain this concept, recall that we referred to coordinates in our data as *voxel coordinates* (i,j,k) such that:

* i is the first dimension of `dwi_data`
* j is the second dimension of `dwi_data`
* k is the third dimension of `dwi_data`

Although this tells us how to access our data in terms of voxels in a 3D volume, it doesn't tell us much about the actual dimensions in our data (centimetres, right or left, up or down, back or front).
The affine matrix allows us to translate between *voxel coordinates* in (i,j,k) and *world space coordinates* in (left/right,bottom/top,back/front).
The affine matrix allows us to translate between *voxel coordinates* and *world space coordinates* in (left/right,bottom/top,back/front).

An important thing to note is that, in reality, the order in which you have:

* left/right
* bottom/top
* back/front

Depends on how you've constructed the affine matrix, but for the data we're dealing with it always refers to:
depends on how the affine matrix is constructed. For the data we're dealing with, it always refers to:

* Right
* Anterior
* Superior

Applying the affine matrix is done through using a *linear map* (matrix multiplication) on voxel coordinates (defined in `dwi_data`).
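As a minimal sketch of this linear map, consider a made-up affine (2 mm isotropic voxels with a translated origin; these numbers are illustrative, not the affine of this dataset):

```python
import numpy as np

# Hypothetical affine: 2.0 mm isotropic voxels, origin offset (-90, -126, -72)
affine = np.array([
    [2.0, 0.0, 0.0, -90.0],
    [0.0, 2.0, 0.0, -126.0],
    [0.0, 0.0, 2.0, -72.0],
    [0.0, 0.0, 0.0, 1.0],
])

# Voxel coordinates (i, j, k), augmented with a 1 so the matrix
# multiplication also applies the translation column
ijk = np.array([45, 63, 36, 1])

# World-space (x, y, z) position of that voxel, in mm
xyz = affine @ ijk
print(xyz[:3])  # → [0. 0. 0.]
```

In practice, NiBabel's `nib.affines.apply_affine(affine, ijk)` performs this mapping without the manual augmentation.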

## Diffusion gradient schemes

In addition to the acquired diffusion images, two files are collected as part of the diffusion dataset.
These files correspond to the gradient amplitude (b-values) and directions (b-vectors) of the diffusion measurement and are named with the extensions `.bval` and `.bvec` respectively.

```{code-cell} python
bvec = "../../data/sub-01_dwi.bvec"
bval = "../../data/sub-01_dwi.bval"
```

The b-value is the diffusion-sensitizing factor, and reflects both the timing and strength of the gradients (measured in s/mm^2) used to acquire the diffusion-weighted images.

```{code-cell} python
!cat ../../data/sub-01_dwi.bval
```

The b-vector corresponds to the direction of the diffusion sensitivity. Each row corresponds to a value in the i, j, or k axis. The numbers are combined column-wise to get an [i j k] coordinate per DWI volume.

```{code-cell} python
!cat ../../data/sub-01_dwi.bvec
```
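To make the column-wise layout concrete, here is a sketch with made-up values (three volumes: one b=0 and two diffusion-weighted directions; not the contents of this dataset's `.bvec` file):

```python
import numpy as np

# Hypothetical .bvec contents: 3 rows (i, j, k), one column per volume
bvec = np.array([
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [0.0, 0.0, 0.0],
])

# Reading column-wise gives each volume its [i j k] gradient direction
for vol in range(bvec.shape[1]):
    print(vol, bvec[:, vol])
```

The first column, [0 0 0], is the degenerate direction of a b=0 volume; the other columns are unit vectors along the i and j axes.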

Together these two files define the dMRI measurement as a set of gradient directions and corresponding amplitudes.

In our example data, we see that 2 b-values were chosen for this scanning sequence.
The first few images were acquired with a b-value of 0 and are typically referred to as b=0 images.
In these images, no diffusion gradient is applied.
These images don't hold any diffusion information and are used as a reference (for head motion correction or brain masking) since they aren't subject to the same types of scanner artifacts that affect diffusion-weighted images.

All of the remaining images have a b-value of 1000 and have a diffusion gradient associated with them.
Diffusion that exhibits directionality in the same direction as the gradient results in a loss of signal.
With further processing, the acquired images can provide measurements related to tissue microstructure and can be used to estimate white matter trajectories.
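The relationship between b-value and signal loss can be sketched with the Stejskal-Tanner equation, S = S0 * exp(-b * D). The values below are illustrative only, not measurements from this dataset:

```python
import numpy as np

# Illustrative values: a baseline signal and a typical apparent
# diffusion coefficient in mm^2/s (made-up, not from this data)
S0 = 1000.0
D = 0.0008

# Signal attenuation for a b=0 and a b=1000 s/mm^2 acquisition
for b in (0, 1000):
    S = S0 * np.exp(-b * D)
    print(b, round(S, 1))  # b=0 keeps full signal; b=1000 loses over half
```

This is why the b=0 volumes look bright and relatively uniform, while the b=1000 volumes are darker wherever diffusion along the gradient direction is strong.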

<video loop="yes" muted="yes" autoplay="yes" controls="yes"><source src="../videos/dMRI-signal-movie.mp4" type="video/mp4"/></video>

We'll use some functions from [Dipy](https://dipy.org), a Python package for pre-processing and analyzing diffusion data.
After reading the `.bval` and `.bvec` files with the `read_bvals_bvecs()` function, we get both in a numpy array. Notice that the `.bvec` file has been transposed so that the i, j, and k-components are in column format.

```{code-cell} python
from dipy.io import read_bvals_bvecs

gt_bvals, gt_bvecs = read_bvals_bvecs(bval, bvec)
gt_bvecs
```

Below is a plot of all of the diffusion directions that we've acquired.

```{code-cell} python
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.scatter(gt_bvecs.T[0], gt_bvecs.T[1], gt_bvecs.T[2])
plt.show()
```

It is important to note that in this format, the diffusion gradients are provided with respect to the image axes, not in real or scanner coordinates. Simply reformatting the image from sagittal to axial will effectively rotate the b-vectors, since this operation changes the image axes. Thus, a particular bvals/bvecs pair is only valid for the particular image that it corresponds to.

The diffusion gradient scheme is critical for analyzing the data later. Since the b-vectors are defined with respect to the image axes, we rotate them into world coordinates using the rotational part of the affine:

```{code-cell} python
:tags: [output_scroll]

import numpy as np

# Keep only the 3x3 rotational component of the affine
if dwi_affine.shape == (4, 4):
    dwi_affine = dwi_affine[:3, :3]

rotated_bvecs = (dwi_affine @ gt_bvecs.T).T
rotated_bvecs
```

```{code-cell} python

fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.scatter(rotated_bvecs.T[0], rotated_bvecs.T[1], rotated_bvecs.T[2])
plt.show()
```

Inspired by MRtrix3 and proposed in the [BIDS spec](https://github.com/bids-standard/bids-specification/issues/349), dMRIPrep also creates an optional `.tsv` file where the diffusion gradients are reported in scanner coordinates as opposed to image coordinates.
The [i j k] values reported earlier are recalculated in [R A S].

```{code-cell} python
:tags: [output_scroll]

rasb = np.c_[rotated_bvecs, gt_bvals]

rasb
```

We can write out this `.tsv` to a file.

```{code-cell} python
np.savetxt(fname="../../data/sub-01_rasb.tsv", delimiter="\t", X=rasb)
```

```{code-cell} python
from dipy.core.gradients import gradient_table

gtab = gradient_table(gt_bvals, rotated_bvecs)
```

## Brain Masking

One of the first things we do before image registration is brain extraction, separating any non-brain material from brain tissue.
This is done so that our algorithms aren't biased or distracted by non-brain material and we don't spend extra time analyzing things we don't care about.
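As a toy illustration of what a binary brain mask does (made-up numbers, not this dataset), masking simply zeroes out every voxel outside the mask:

```python
import numpy as np

# A tiny fake "volume" and a boolean mask selecting the brighter voxels
volume = np.arange(8, dtype=float).reshape(2, 2, 2)
mask = volume > 3  # pretend voxels with intensity > 3 are brain

# Element-wise multiplication keeps brain voxels, zeroes the rest
masked = volume * mask
print(int(masked.sum()))  # → 22  (4 + 5 + 6 + 7)
```

Real brain masks are computed from the image itself, as the `median_otsu` step below does, but the final application is this same element-wise operation.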

As mentioned before, the b=0 volumes are a good reference scan for brain masking. Let's index them.

```{code-cell} python
gtab.b0s_mask
```

```{code-cell} python
bzero = dwi_data[:, :, :, gtab.b0s_mask]
```

With the b=0 volumes extracted, we can perform skull-stripping. There are 5 b=0 volumes in our data:

```{code-cell} python
bzero.shape
```

We take the median image across the b=0 volumes to obtain a single reference volume:

```{code-cell} python

median_bzero = np.median(bzero, axis=-1)
```

```{code-cell} python
from dipy.segment.mask import median_otsu

b0_mask, mask = median_otsu(median_bzero, median_radius=2, numpass=1)
```

```{code-cell} python
from dipy.core.histeq import histeq

sli = median_bzero.shape[2] // 2
plt.subplot(1, 3, 1).set_axis_off()
plt.imshow(histeq(median_bzero[:, :, sli].astype("float")).T,
           cmap="gray", origin="lower")

plt.subplot(1, 3, 2).set_axis_off()
plt.imshow(mask[:, :, sli].astype("float").T, cmap="gray", origin="lower")

plt.subplot(1, 3, 3).set_axis_off()
plt.imshow(histeq(b0_mask[:, :, sli].astype("float")).T,
           cmap="gray", origin="lower")
```
The affine matrix is applied as a *linear map* (matrix multiplication) on the voxel coordinates (defined in `dwi_data`).
Binary file modified docs/images/binder_link.png
2 changes: 1 addition & 1 deletion docs/nipreps/dmriprep.md
@@ -6,7 +6,7 @@ dMRIPrep is an analysis-agnostic tool that addresses the challenge of robust and

## Development

In his 2019 ISMRM talk [^veraart2019], Jelle Veraart polled the developers of some of the major dMRI analysis software packages.
In his 2019 ISMRM talk [^veraart2019], Dr. J. Veraart polled the developers of some of the major dMRI analysis software packages.
The goal was to understand how much consensus there was in the field on whether to proceed with certain pre-processing steps.
The poll showed that consensus is within reach.
However, different pre-processing steps have varying levels of support.
2 changes: 1 addition & 1 deletion docs/nipreps/nipreps.md
@@ -83,7 +83,7 @@ This eases the burden of maintaining these tools but also helps focus on standar
*NiPreps* only support BIDS-Derivatives as output and so are agnostic to subsequent analysis.

*NiPreps* also aim to be robust in their codebase.
The pipelines are modular and rely on widely-used tools such as AFNI, ANTs, FreeSurfer, FSL, Nilearn, or Dipy and are extensible via plug-ins.
The pipelines are modular and rely on widely-used tools such as AFNI, ANTs, FreeSurfer, FSL, Nilearn, or DIPY and are extensible via plug-ins.
This modularity in the code allows each step to be thoroughly tested. Some examples of tests performed on different parts of the pipeline are shown below:

```{tabbed} unittest
6 changes: 5 additions & 1 deletion docs/preparation/step0.md
@@ -1,19 +1,22 @@
# Before we start: How to follow this tutorial

This tutorial contains a mix of lecture-based and interactive components.
The interactive components can be executed in your personal computer locally or using the Binder service.
You are welcome to follow along however you like, whether you just want to listen or code along with us.
Each of the chapters involving code is actually a Jupyter notebook!

## Using Binder

Clicking the Binder icon in the top right corner will launch an interactive computing environment with all of the necessary software packages pre-installed.

```{figure} images/binder_link.png
```{figure} ../images/binder_link.png
:name: binder_link

Binder Link
```

## Local installation ("bare-metal")

If you would like to follow along using your own setup and you have a functional Python environment, you can:

```bash
@@ -30,3 +33,4 @@ jupyter lab
```

## Local installation ("docker containers")

2 changes: 0 additions & 2 deletions docs/references.bib

This file was deleted.
