diff --git a/docs/images/project_structure.png b/docs/images/project_structure.png
index 2f67d39..de93e4f 100644
Binary files a/docs/images/project_structure.png and b/docs/images/project_structure.png differ
diff --git a/docs/nipreps/dmriprep.md b/docs/nipreps/dmriprep.md
index d98336f..1eac600 100644
--- a/docs/nipreps/dmriprep.md
+++ b/docs/nipreps/dmriprep.md
@@ -42,4 +42,6 @@ Proposed dMRI pre-processing workflow
Current list of contributors to dMRIPrep
```
+### References
+
[^veraart2019]: Image Processing: Possible Guidelines for Standardization & Clinical Applications. https://www.ismrm.org/19/program_files/MIS15.htm
diff --git a/docs/nipreps/nipreps.md b/docs/nipreps/nipreps.md
index dab23df..e4e2a88 100644
--- a/docs/nipreps/nipreps.md
+++ b/docs/nipreps/nipreps.md
@@ -69,7 +69,6 @@ They can be organized into 3 layers:
```{figure} ../images/nipreps-chart.png
:name: nipreps_chart
-
```
## NiPreps driving principles
@@ -96,7 +95,6 @@ This modularity in the code allows each step to be thoroughly tested. Some examp
Checks whether a function or class method behaves as expected.

-
```
```{tabbed} doctest
@@ -105,7 +103,6 @@ Also checks whether code behaves as expected and serves as an example for how to


-
```
```{tabbed} integration test
@@ -113,14 +110,12 @@ Checks the behaviour of a system (multiple pieces of code).
Can also be used to determine whether the system is behaving suboptimally.

-
```
```{tabbed} build test
Checks that code or software environment can be compiled and deployed.

-
```
### 2. Easy to use
@@ -137,7 +132,6 @@ Thanks to limiting the input dataset to BIDS, manual parameter input is reduced
```{figure} ../images/dwi_reportlet.gif
:name: reportlet
-
```
*NiPreps* are also community-driven.
diff --git a/docs/opensource/community_development.md b/docs/opensource/community_development.md
index 5254452..2112a5f 100644
--- a/docs/opensource/community_development.md
+++ b/docs/opensource/community_development.md
@@ -7,7 +7,8 @@
```
All *NiPreps* are open to community feedback and contributions.
-Contributing to seemingly big projects can feel scary at first. This lesson will help orient you to how an example *NiPreps* project is organized and how to begin making a contribution.
+Contributing to seemingly big projects can feel scary at first.
+This lesson will help orient you to how an example *NiPreps* project is organized and how to begin making a contribution.
## Getting started with GitHub
@@ -79,9 +80,17 @@ Once you get the 👍 from the project maintainers, you are ready to begin your
## Plan ahead
+It's important to start off by defining the problem.
+Have a clear aim for what you want your contribution to achieve.
+While you are making changes, you may discover ideas that hadn't come to mind before.
+However, it's important to stay on target and keep your contribution easily digestible.
+
## Making a change
-https://nipreps.org/community/CONTRIBUTING/#making-a-change
+From here, continue following the [Making a change section](https://nipreps.org/community/CONTRIBUTING/#making-a-change) of the contributing guidelines.
+This section goes into more detail on how to create a local copy of a GitHub project and use version control to keep track of your changes.
+
+You can find a brief summary below:
Make sure your git credentials are configured.
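+A minimal sketch of this first step (the name and email below are placeholders you should replace with your own):
+
+```bash
+# Tell git who you are; this information is recorded in every commit you make
+git config --global user.name "Your Name"
+git config --global user.email "you@example.com"
+
+# Verify the configuration
+git config --global --list
+```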
diff --git a/docs/preparation/step0.md b/docs/preparation/step0.md
index b402705..53f7f08 100644
--- a/docs/preparation/step0.md
+++ b/docs/preparation/step0.md
@@ -58,7 +58,7 @@ If you have a working Docker installation and would like to use the workshop's D
```bash
-docker run --rm -p 9999:8888 -e JUPYTER_ENABLE_LAB=yes josephmje/nipreps-book:latest
+docker run --rm -p 9999:8888 -e JUPYTER_ENABLE_LAB=yes nipreps/nipreps-book:latest
```
diff --git a/docs/tutorial/data.md b/docs/tutorial/data.md
index 6bc21ac..47d98ed 100644
--- a/docs/tutorial/data.md
+++ b/docs/tutorial/data.md
@@ -18,7 +18,6 @@ kernelspec:
import warnings
warnings.filterwarnings("ignore")
-
```
Diffusion imaging probes the random, microscopic movement of water molecules by using MRI sequences that are sensitive to the geometry and environmental organization surrounding these protons.
@@ -55,7 +54,6 @@ In the example code below, we've created a class with the name `DWI`.
To simplify class creation, we've also used the magic of a Python library called [`attrs`](https://www.attrs.org/en/stable/).
```{code-cell} python
-
"""Representing data in hard-disk and memory."""
import attr
@@ -86,7 +84,6 @@ class DWI:
def __len__(self):
"""Obtain the number of high-*b* orientations."""
return self.gradients.shape[-1]
-
```
This code implements several *attributes* as well as a *behavior* - the `__len__` *method*.
@@ -95,7 +92,6 @@ The `__len__` method is special in Python, as it will be executed when we call t
Let's test out the `DWI` data structure with some *simulated* data:
```{code-cell} python
-
# NumPy is a fundamental Python library for working with arrays
import numpy as np
@@ -104,7 +100,6 @@ dmri_dataset = DWI(gradients=np.random.normal(size=(4, 64)))
# call Python's built-in len() function
print(len(dmri_dataset))
-
```
The output of this `print()` statement is telling us that this (simulated) dataset has 64 diffusion-weighted samples.
@@ -118,14 +113,12 @@ Please note that the file has been minimized by zeroing all but two diffusion-we
Let's get some insights from it:
```{code-cell} python
-
# import the class from the library
from eddymotion.dmri import DWI
# load the sample file
dmri_dataset = DWI.from_filename("../../data/dwi.h5")
print(len(dmri_dataset))
-
```
In this case, the dataset is reporting to have 102 diffusion-weighted samples.
@@ -134,9 +127,7 @@ Python will automatically generate a summary of this object if we just type the
This pretty-printing of the object informs us about the data and metadata that, together, compose this particular DWI dataset:
```{code-cell} python
-
dmri_dataset
-
```
We'll go over some of the components of `dmri_dataset` through this lesson.
@@ -147,9 +138,7 @@ Let's start out by seeing what the data looks like.
The fully-fledged `DWI` object has a convenience function to plot the dataset:
```{code-cell} python
-
-dmri_dataset.plot_mosaic()
-
+dmri_dataset.plot_mosaic();
```
When calling `plot_mosaic()` without any arguments, the *b=0* reference is plotted.
@@ -167,17 +156,13 @@ Try calling `plot_mosaic` with an index of 10 or 100.
```{code-cell} python
:tags: [hide-cell]
-dmri_dataset.plot_mosaic(index=10, vmax=5000)
-
+dmri_dataset.plot_mosaic(index=10, vmax=5000);
```
-or:
-
```{code-cell} python
:tags: [hide-cell]
-dmri_dataset.plot_mosaic(index=100, vmax=5000)
-
+dmri_dataset.plot_mosaic(index=100, vmax=5000);
```
Diffusion that exhibits directionality in the same direction as the gradient results in a loss of signal.
@@ -192,7 +177,7 @@ Stronger gradients yield diffusion maps with substantially lower SNR (signal-to-
Our `DWI` object stores the gradient information in the `gradients` attribute.
```{admonition} Exercise
-Let's see the shape of the gradient information
+Let's see the shape of the gradient information.
```
**Solution**
@@ -201,7 +186,6 @@ Let's see the shape of the gradient information
:tags: [hide-cell]
dmri_dataset.gradients.shape
-
```
We get a $4\times102$ -- three spatial coordinates ($b_x$, $b_y$, $b_z$) of the unit-norm "*b-vector*", plus the gradient sensitization magnitude (the "*b-value*"), with a total of 102 different orientations for the case at hand.
@@ -217,7 +201,6 @@ Remember to transpose (`.T`) the array.
:tags: [hide-cell]
print(dmri_dataset.gradients.T)
-
```
Later, we'll refer to this array as the gradient table.
@@ -249,7 +232,6 @@ In this case, the splitting strategy is a simple leave-one-out.
Because one "*datapoint*" in our DWI dataset corresponds to one gradient, we will refer to this partitioning of the dataset as *leave-one-gradient-out (LOGO)*:
```{code-cell} python
-
def logo_split(self, index, with_b0=False):
"""
Produce one fold of LOGO (leave-one-gradient-out).
@@ -297,18 +279,15 @@ def logo_split(self, index, with_b0=False):
(train_data, train_gradients),
(dwframe, bframe),
)
-
```
This function is contained in the `DWI` class shown earlier and will allow us to easily partition the dataset as follows:
```{code-cell} python
-
from eddymotion.viz import plot_dwi
data_train, data_test = dmri_dataset.logo_split(10)
-plot_dwi(data_test[0], dmri_dataset.affine, gradient=data_test[1])
-
+plot_dwi(data_test[0], dmri_dataset.affine, gradient=data_test[1]);
```
`data_train` is a tuple containing all diffusion-weighted volumes and the corresponding gradient table, excluding the left-out, which is stored in `data_test` (the 11th gradient indexed by `10`, in this example).
@@ -318,6 +297,8 @@ plot_dwi(data_test[0], dmri_dataset.affine, gradient=data_test[1])
Try printing the shapes of elements in the `data_train` tuple.
```
+**Solution**
+
```{code-cell} python
:tags: [hide-cell]
@@ -329,6 +310,8 @@ print(f"data_train[1] is a gradient table and has {data_train[1].shape} dimensio
Likewise for the left-out gradient, try printing the shapes of elements in the `data_test` tuple.
```
+**Solution**
+
```{code-cell} python
:tags: [hide-cell]
diff --git a/docs/tutorial/intro.md b/docs/tutorial/intro.md
index 0a73eea..c8829f4 100644
--- a/docs/tutorial/intro.md
+++ b/docs/tutorial/intro.md
@@ -20,7 +20,7 @@ While we can address the misalignment, it is really problematic to overcome the
## Objective: Implement a head-motion estimation code
This tutorial focuses on the misalignment problem.
-We will build from existing software (Dipy for diffusion modeling and ANTs for image registration), as well as commonplace Python libraries (NumPy), a software framework for head-motion estimation in diffusion MRI data.
+We will build from existing software (DIPY for diffusion modeling and ANTs for image registration), as well as commonplace Python libraries (NumPy), a software framework for head-motion estimation in diffusion MRI data.
The algorithmic and theoretical foundations of the method are based on an idea first proposed by [Ben-Amitay et al.](https://pubmed.ncbi.nlm.nih.gov/22183784/) and later implemented in *QSIPREP* (see this [OHBM 2019 poster](https://github.com/mattcieslak/ohbm_shoreline/blob/master/cieslakOHBM2019.pdf)).
The idea works as follows:
@@ -68,8 +68,8 @@ pointed at by the variable `data`, and assuming we have a list of rigid-body tra
a potential API would have a `.fit()` and `.predict()` members which run the algorithm (the former) and generate an EM-corrected
DWI (the latter):
-```Python
-from emc import EddyMotionEstimator
+```python
+from eddymotion import EddyMotionEstimator
estimator = EddyMotionEstimator()
estimator.fit(data, model="DTI")
diff --git a/docs/tutorial/models.md b/docs/tutorial/models.md
index bc1b2fc..20f1623 100644
--- a/docs/tutorial/models.md
+++ b/docs/tutorial/models.md
@@ -12,6 +12,14 @@ kernelspec:
# Diffusion modeling
+```{code-cell} python
+:tags: [hide-cell]
+
+import warnings
+
+warnings.filterwarnings("ignore")
+```
+
The proposed method requires inferring a motion-less, reference DW map for a given diffusion orientation for which we want to estimate the misalignment.
Inference of the reference map is achieved by first fitting some diffusion model (which we will draw from [DIPY](https://dipy.org)) using all data, except the particular DW map that is to be aligned.
This data splitting scheme was introduced in the previous section, in [](data.md#the-logo-leave-one-gradient-out-splitter).
@@ -23,9 +31,12 @@ All models are required to offer the same API (application programmer interface)
3. A `predict(gradient_table)` method, which only requires a `GradientTable` as input.
This method produces a prediction of the signal for every voxel in every direction represented in the input `gradient_table`.
-```{code-cell} python
-:tags: [hide-cell]
+```{attention}
+By default, each Jupyter notebook runs in its own process.
+We must therefore reload the dataset to use it in this notebook.
+```
+```{code-cell} python
from eddymotion.dmri import DWI
from eddymotion.viz import plot_dwi
dmri_dataset = DWI.from_filename("../../data/dwi.h5")
@@ -43,7 +54,6 @@ This model will allow to easily test the overall integration of the different co
Also, this model will allow a very straightforward implementation of registration to the *b=0* reference, which is commonly used to initialize the head-motion estimation parameters.
```{code-cell} python
-
class TrivialB0Model:
"""
A trivial model that returns a *b=0* map always.
@@ -70,15 +80,14 @@ class TrivialB0Model:
def predict(self, gradient, **kwargs):
"""Return the *b=0* map."""
return self._S0
-
```
-
The model can easily be initialized as follows (assuming we still have our dataset loaded):
+
```{code-cell} python
model = TrivialB0Model(
- dmri_dataset.gradients,
- S0=dmri_dataset.bzero,
+ dmri_dataset.gradients,
+ S0=dmri_dataset.bzero,
)
```
@@ -97,14 +106,16 @@ plot_dwi(predicted, dmri_dataset.affine, gradient=data_test[1]);
```
As expected, the *b=0* doesn't look very much like the particular left-out direction, but it is a start!
+
```{code-cell} python
plot_dwi(data_test[0], dmri_dataset.affine, gradient=data_test[1]);
```
## Implementing a *regression to the mean* model
-### Exercise
-**Exercise**: Extend the `TrivialB0Model` to produce an average of *all other* diffusion directions, instead of the *b=0*.
+```{admonition} Exercise
+Extend the `TrivialB0Model` to produce an average of *all other* diffusion directions, instead of the *b=0*.
+```
```{code-cell} python
class AverageDWModel:
@@ -126,6 +137,7 @@ class AverageDWModel:
```
**Solution**
+
```{code-cell} python
:tags: [hide-cell]
@@ -147,9 +159,12 @@ class AverageDWModel:
return self._data
```
-### Exercise
-**Exercise**: Use the new `AverageDWModel` you just created.
+```{admonition} Exercise
+Use the new `AverageDWModel` you just created.
+```
+
**Solution**
+
```{code-cell} python
:tags: [hide-cell]
@@ -169,6 +184,7 @@ We will use the wrap around DIPY's implementation that we distribute with `eddym
```{code-cell} python
:tags: [remove-cell]
+
from tempfile import mkstemp
from pathlib import Path
import requests
@@ -188,6 +204,7 @@ data_train, data_test = dmri_dataset.logo_split(88, with_b0=True)
```
### The model factory
+
To permit flexibility in selecting models, the `eddymotion` package offers a `ModelFactory` that implements the *facade design pattern*.
This means that `ModelFactory` makes it easier for the user to switch between models:
@@ -213,18 +230,23 @@ predicted = model.predict(data_test[1])
```
Now, the predicted map for the particular ***b*** gradient looks much closer to the original:
+
```{code-cell} python
plot_dwi(predicted, dmri_dataset.affine, gradient=data_test[1], black_bg=True);
```
Here's the original DW map, for reference:
+
```{code-cell} python
plot_dwi(data_test[0], dmri_dataset.affine, gradient=data_test[1]);
```
-### Exercise
+```{admonition} Exercise
+Use the `ModelFactory` to initialize a `"DKI"` (diffusion kurtosis imaging) model.
+```
+
+**Solution**
-**Exercise**: Use the `ModelFactory` to initialize a `"DKI"` (diffusion Kurtosis imaging) model
```{code-cell} python
:tags: [hide-cell]
diff --git a/docs/tutorial/registration.md b/docs/tutorial/registration.md
index 13ed618..c6404ee 100644
--- a/docs/tutorial/registration.md
+++ b/docs/tutorial/registration.md
@@ -1,5 +1,25 @@
+---
+jupytext:
+ formats: md:myst
+ text_representation:
+ extension: .md
+ format_name: myst
+kernelspec:
+ display_name: Python 3
+ language: python
+ name: python3
+---
+
# Image registration (spatial alignment)
+```{code-cell} python
+:tags: [hide-cell]
+
+import warnings
+
+warnings.filterwarnings("ignore")
+```
+
At this point of the tutorial we have covered two of the three initial requirements:
* we have a powerful data structure to access our dMRI dataset with agility, and
@@ -13,6 +33,7 @@ This means that, brain sulci and gyri, the ventricles, subcortical structures, e
That allows, for instance, for **image fusion**, and hence screening both images together (for example, applying some transparency to the one on top) should not give us the perception that they *are not aligned*.
## ANTs - Advanced Normalization ToolS
+
The [ANTs toolbox](http://stnava.github.io/ANTs/) is widely recognized as a powerful image registration (and *normalization*, which is registration to some standard space) framework.
The output of an image registration process is the *estimated transform* that brings the information in the two images into alignment.
@@ -23,11 +44,11 @@ Only very recently, [ANTs offers a Python interface](https://doi.org/10.1101/202
For this reason, we will use the very much consolidated [*Nipype* wrapping of the ANTs' command-line interface](https://nipype.readthedocs.io/en/latest/api/generated/nipype.interfaces.ants.html#registration).
The code is *almost* as simple as follows:
-```Python
+```python
from nipype.interfaces.ants import Registration
registration_framework = Registration(
- fixed_image="reference.nii.gz",
+ fixed_image="reference.nii.gz",
moving_image="left-out-gradient.nii.gz",
from_file="settings-file.json"
)
@@ -81,6 +102,7 @@ The most relevant piece of settings to highlight is the `"transforms"` key, wher
It is beyond the scope of this tutorial to understand ANTs and/or image registration altogether.
## Resampling an image
+
Once we have estimated what is the *transform* that brings two images into alignment, we can *bring* the data in the *moving* image and *move this image* into the *reference*'s grid through *resampling*.
The process works as follows:
@@ -91,7 +113,8 @@ The process works as follows:
We will be using *NiTransforms* to *apply* these transforms we estimate with ANTs -- effectively *resampling* moving images into their reference's grid.
To read a transform produced by ANTs with *NiTransforms*, we use the following piece of code:
-```Python
+
+```python
import nitransforms as nt
xform = nt.io.itk.ITKLinearTransform.from_filename("ants-generated-rigid-xform.mat")
@@ -99,7 +122,7 @@ xform = nt.io.itk.ITKLinearTransform.from_filename("ants-generated-rigid-xform.m
Resampling an image requires two pieces of information: the *reference* image (which provides the new grid where we want to have the data) and the *moving* image which contains the actual data we are interested in:
-```Python
+```python
xform.reference = "reference-image.nii.gz"
resampled = xform.apply("moving-image.nii.gz")
```
diff --git a/docs/tutorial/solution.md b/docs/tutorial/solution.md
index a6ab4eb..bc2aac2 100644
--- a/docs/tutorial/solution.md
+++ b/docs/tutorial/solution.md
@@ -13,7 +13,12 @@ kernelspec:
# Putting everything together
```{code-cell} python
-:tags: [remove-cell]
+:tags: [hide-cell]
+
+import warnings
+
+warnings.filterwarnings("ignore")
+
from tempfile import mkstemp
from pathlib import Path
import requests
@@ -37,7 +42,7 @@ Once we have finalized the main components of the solution, it is time for integ
We now want to iterate over all the *LOGO* partitions of the dataset, generate a synthetic reference through the model of choice, and finally estimate the misalignment between the left-out gradient and the synthetic reference.
This solution, must also abide by the API we have envisioned.
-```Python
+```python
class EddyMotionEstimator:
"""Estimates rigid-body head-motion and distortions derived from eddy-currents."""
@@ -52,9 +57,9 @@ class EddyMotionEstimator:
):
r"""
Estimate head-motion and Eddy currents.
-
+
-
+
"""
align_kwargs = align_kwargs or {}
@@ -74,7 +79,7 @@ class EddyMotionEstimator:
dwmodel = ...
# fit the model
-
+
# generate a synthetic dw volume for the test gradient
predicted = ...
@@ -108,6 +113,7 @@ class EddyMotionEstimator:
```
**Solution**
+
```{code-cell} python
:tags: [hide-cell]
@@ -125,7 +131,7 @@ class EddyMotionEstimator:
):
r"""
Estimate head-motion and Eddy currents.
-
+
Parameters
----------
dwdata : :obj:`~eddymotion.dmri.DWI`
@@ -142,13 +148,13 @@ class EddyMotionEstimator:
corresponding to each gradient map.
See :obj:`~eddymotion.model.ModelFactory` for allowed models (and corresponding
keywords).
-
+
Return
------
affines : :obj:`list` of :obj:`numpy.ndarray`
A list of :math:`4 \times 4` affine matrices encoding the estimated
parameters of the deformations caused by head-motion and eddy-currents.
-
+
"""
align_kwargs = align_kwargs or {}
@@ -205,19 +211,21 @@ class EddyMotionEstimator:
The above code allows us to use our estimator as follows:
-```Python
+```python
from eddymotion.estimator import EddyMotionEstimator
estimated_affines = EddyMotionEstimator.fit(dmri_dataset, model="b0")
```
## What's next? - Testing!
+
Once we have our first implementation functional, we should think of some unit-tests for our code.
**Exercise**: write a unit test for the `TrivialB0Model`.
This test would just make sure that, regardless of the particular partition of the input dataset, a *b=0* map is always returned.
**Solution**: in this solution, we are using `pytest` to integrate the test into a higher-level test suite.
+
```{code-cell} python
:tags: [hide-cell]
@@ -226,16 +234,15 @@ import pytest
@pytest.mark.parametrize("split_index", list(range(30)))
def test_TrivialB0Model(split_index, dmri_dataset):
- model = TrivialB0Model(
- dmri_dataset.gradients,
- S0=dmri_dataset.bzero,
- )
- data_train, data_test = dmri_dataset.logo_split(split_index)
- model.fit(data_train[0])
- predicted = model.predict(data_test[1])
-
- assert np.all(dmri_dataset.bzero == predicted)
+ model = TrivialB0Model(
+ dmri_dataset.gradients,
+ S0=dmri_dataset.bzero,
+ )
+ data_train, data_test = dmri_dataset.logo_split(split_index)
+ model.fit(data_train[0])
+ predicted = model.predict(data_test[1])
+ assert np.all(dmri_dataset.bzero == predicted)
```
## And after testing? - Validation!
@@ -245,10 +252,10 @@ Only after we have both steps secure, we can run benchmarks and evaluations from
The main strategy to validate this software would entail finding/acquiring a special dataset where motion is not present or extremely low, in which we *introduce* a known head-motion pattern with which we are going to challenge our estimator.
Some ideas to achieve this are:
-* a dataset acquired with special sequences that can do prospective motion correction, or
-* a dataset that has been acquired under very controlled settings, with an extremely collaborative participant who was also wearing a personalized mold, or
-* a fully synthetic dataset such as the Fiber Box, or
-* a fully synthetic dataset containing a repeated *b=0* image (this evaluation would be limited to work with the `TrivialB0Model`, for instance).
-***Please head to [the GitHub repository](https://github.com/nipreps/EddyMotionCorrection) and share your ideas with us! We are welcoming new contributors!***
+- a dataset acquired with special sequences that can do prospective motion correction, or
+- a dataset that has been acquired under very controlled settings, with an extremely collaborative participant who was also wearing a personalized mold, or
+- a fully synthetic dataset such as the Fiber Box, or
+- a fully synthetic dataset containing a repeated *b=0* image (this evaluation would be limited to work with the `TrivialB0Model`, for instance).
+***Please head to [the GitHub repository](https://github.com/nipreps/EddyMotionCorrection) and share your ideas with us! We are welcoming new contributors!***