[DOC] link terms to glossary #4038

Merged
merged 19 commits into from
Oct 13, 2023
4 changes: 2 additions & 2 deletions doc/building_blocks/manual_pipeline.rst
@@ -61,7 +61,7 @@ Loading non image data: experiment description
-----------------------------------------------

An experiment may need additional information about subjects, sessions or
experiments. In the Haxby experiment, fMRI data are acquired while
experiments. In the Haxby experiment, :term:`fMRI` data are acquired while
presenting different categories of pictures to the subject (face, cat, ...)
and the goal of this experiment is to predict which category is presented
to the subjects from the brain activation.
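
A minimal sketch of loading that experiment description, assuming the standard Haxby labels file fetched by :func:`nilearn.datasets.fetch_haxby` (space-separated ``labels`` and ``chunks`` columns)::

    import pandas as pd
    from nilearn import datasets

    haxby_dataset = datasets.fetch_haxby()
    # One row per scan: the stimulus category and the run ("chunk") it belongs to.
    behavioral = pd.read_csv(haxby_dataset.session_target[0], sep=" ")
    conditions = behavioral["labels"]  # e.g. 'face', 'cat', 'rest', ...
    runs = behavioral["chunks"]
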
@@ -216,7 +216,7 @@ Here we want to see the discriminating weights of some voxels.
Visualizing results
===================

Again the visualization code is simple. We can use an fMRI slice as a
Again the visualization code is simple. We can use an :term:`fMRI` slice as a
background and plot the weights. Brighter points have a higher
discriminating weight.
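
A hedged sketch of that visualization (``weight_img`` stands for a weight map produced by the decoder in the previous steps; it is not defined here)::

    from nilearn import plotting
    from nilearn.image import mean_img

    # Use the mean functional image as a background for the weight map.
    background_img = mean_img(haxby_dataset.func[0])
    plotting.plot_stat_map(weight_img, bg_img=background_img,
                           title="SVM weights", display_mode="z")
    plotting.show()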

6 changes: 3 additions & 3 deletions doc/building_blocks/neurovault.rst
@@ -77,7 +77,7 @@ The default values for the ``collection_terms`` and ``image_terms`` parameters
filter out empty collections, and exclude an image if one of the following is
true:

- it is not in MNI space.
- it is not in :term:`MNI` space.
- its metadata field "is_valid" is cleared.
- it is thresholded.
- its map type is one of "ROI/mask", "anatomical", or "parcellation".
@@ -146,7 +146,7 @@ Using a filter rather than a dictionary, the first example becomes:
Even if you specify a filter as a function, the default filters for
``image_terms`` and ``collection_terms`` still apply; pass an empty
dictionary if you want to disable them. Without ``image_terms={}`` in the
call above, parcellations, images not in MNI space, etc. would be still be
call above, parcellations, images not in :term:`MNI` space, etc. would still be
filtered out.
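
For instance, a sketch combining a filter function with ``image_terms={}`` to disable the default image filters (the metadata field used in the filter is illustrative; actual Neurovault keys may differ)::

    from nilearn.datasets import fetch_neurovault

    def only_unthresholded(image_metadata):
        # Keep an image only if its metadata does not mark it as thresholded.
        return not image_metadata.get("is_thresholded", False)

    nv_data = fetch_neurovault(max_images=20,
                               image_terms={},
                               image_filter=only_unthresholded)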


@@ -186,7 +186,7 @@ Neurosynth annotations

It is also possible to ask Neurosynth to annotate the maps found on
Neurovault. Neurosynth is a platform for large-scale, automated
synthesis of fMRI data. It can be used to perform decoding. You can
synthesis of :term:`fMRI` data. It can be used to perform decoding. You can
learn more about Neurosynth at http://www.neurosynth.org.

Neurosynth was introduced in [2]_.
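
A sketch of requesting those annotations while fetching images, assuming the ``fetch_neurosynth_words`` flag of :func:`nilearn.datasets.fetch_neurovault`::

    from nilearn.datasets import fetch_neurovault

    # Request Neurosynth term annotations along with the images
    # (the exact fields of the returned bunch are not listed here).
    nv_data = fetch_neurovault(max_images=10, fetch_neurosynth_words=True)
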
2 changes: 1 addition & 1 deletion doc/connectivity/connectome_extraction.rst
@@ -26,7 +26,7 @@ Sparse inverse covariance for functional connectomes

Functional connectivity can be obtained by estimating a covariance
(or correlation) matrix for signals from different brain
regions decomposed, for example on resting-state or naturalistic-stimuli datasets.
regions decomposed, for example on :term:`resting-state` or naturalistic-stimuli datasets.
The same information can be represented as a weighted graph,
vertices being brain regions, weights on edges being covariances
(gaussian graphical model). However, coefficients in a covariance matrix
4 changes: 2 additions & 2 deletions doc/connectivity/functional_connectomes.rst
@@ -143,7 +143,7 @@ Probabilistic atlases
The definition of regions by a continuous probability map better captures
our imperfect knowledge of boundaries in brain images (notably
because of inter-subject registration errors). One example of such an
atlas well suited to resting-state or naturalistic-stimuli data analysis is
atlas well suited to :term:`resting-state` or naturalistic-stimuli data analysis is
the `MSDL atlas
<https://team.inria.fr/parietal/18-2/spatial_patterns/spatial-patterns-in-resting-state/>`_
(:func:`nilearn.datasets.fetch_atlas_msdl`).
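
A sketch of using such a probabilistic atlas end to end, from fetching the MSDL maps to extracting one subject's region signals and plotting the resulting connectome (the development fMRI subject is only an assumed example input)::

    from nilearn import datasets, plotting
    from nilearn.maskers import NiftiMapsMasker
    from nilearn.connectome import ConnectivityMeasure

    msdl = datasets.fetch_atlas_msdl()
    data = datasets.fetch_development_fmri(n_subjects=1)

    masker = NiftiMapsMasker(maps_img=msdl.maps, standardize=True)
    time_series = masker.fit_transform(data.func[0],
                                       confounds=data.confounds[0])

    correlation = ConnectivityMeasure(kind="correlation")
    matrix = correlation.fit_transform([time_series])[0]
    plotting.plot_connectome(matrix, msdl.region_coords,
                             edge_threshold="80%")
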
@@ -214,7 +214,7 @@ the edges capture interactions between them, this graph is a "functional
connectome".

We can display it with the :func:`nilearn.plotting.plot_connectome`
function that take the matrix, and coordinates of the nodes in MNI space.
function that takes the matrix and the coordinates of the nodes in :term:`MNI` space.
In the case of the MSDL atlas
(:func:`nilearn.datasets.fetch_atlas_msdl`), the CSV file readily comes
with :term:`MNI` coordinates for each region (see for instance example:
12 changes: 6 additions & 6 deletions doc/connectivity/parcellating.rst
@@ -24,9 +24,9 @@ Data loading: movie-watching data

.. currentmodule:: nilearn.datasets

Clustering is commonly applied to resting-state data, but any brain
Clustering is commonly applied to :term:`resting-state` data, but any brain
functional data will give rise to a functional parcellation, capturing
intrinsic brain architecture in the case of resting-state data.
intrinsic brain architecture in the case of :term:`resting-state` data.
In the examples, we use naturalistic stimuli-based movie watching
brain development data downloaded with the function
:func:`fetch_development_fmri` (see :ref:`loading_data`).
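
For example, a minimal fetch of a few subjects (the number of subjects is arbitrary here)::

    from nilearn.datasets import fetch_development_fmri

    development = fetch_development_fmri(n_subjects=10)
    func_filenames = development.func    # list of 4D functional images
    confounds = development.confounds    # matching confound files
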
@@ -79,7 +79,7 @@ Ward's algorithm is a hierarchical clustering algorithm: it
recursively merges voxels, then clusters that have similar signal
(parameters, measurements or time courses).

**Caching** In practice the implementation of Ward clustering first
**Caching** In practice the implementation of :term:`Ward clustering` first
computes a tree of possible merges, and then, given a requested number of
clusters, breaks apart the tree at the right level.
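
A sketch of a cached Ward parcellation (``func_filenames`` comes from the fetch sketch above; parameter values are illustrative)::

    from nilearn.regions import Parcellations

    ward = Parcellations(method="ward", n_parcels=1000,
                         standardize=False, smoothing_fwhm=2.,
                         memory="nilearn_cache", memory_level=1)
    # The merge tree is computed once and cached; asking for another number
    # of parcels later re-uses it.
    ward.fit(func_filenames)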

@@ -92,7 +92,7 @@ used for caching.

.. note::

The Ward clustering computing 1000 parcels runs typically in about 10
The :term:`Ward clustering` computing 1000 parcels typically runs in about 10
seconds. Admittedly, this is very fast.

.. note::
@@ -105,7 +105,7 @@ used for caching.

* A function :func:`nilearn.regions.connected_label_regions` which can be useful to
break down connected components into regions. For instance, clusters defined using
KMeans whereas it is not necessary for Ward clustering due to its
KMeans may need this step, whereas it is not necessary for :term:`Ward clustering` due to its
spatial connectivity.
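
A sketch of that post-processing step (``kmeans_labels_img`` stands for the label image of a hypothetical KMeans parcellation)::

    from nilearn.regions import connected_label_regions

    # Relabel so that each spatially connected component gets its own label.
    separated_regions_img = connected_label_regions(kmeans_labels_img)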


@@ -117,7 +117,7 @@ Using and visualizing the resulting parcellation
Visualizing the parcellation
-----------------------------

The labels of the parcellation are found in the ``labels_img_`` attribute of
The labels of the :term:`parcellation` are found in the ``labels_img_`` attribute of
the :class:`nilearn.regions.Parcellations` object after fitting it to the data
using *ward.fit*. We directly use the result for visualization.
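
For instance, a minimal sketch (``ward`` is the fitted object from the clustering step above)::

    from nilearn import plotting

    ward_labels_img = ward.labels_img_
    plotting.plot_roi(ward_labels_img, title="Ward parcellation",
                      display_mode="xz")
    plotting.show()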

2 changes: 1 addition & 1 deletion doc/connectivity/region_extraction.rst
@@ -22,7 +22,7 @@ Region Extraction for better brain parcellations
Fetching movie-watching based functional datasets
=================================================

We use a naturalistic stimuli based movie-watching functional connectivity dataset
We use a naturalistic-stimuli-based movie-watching :term:`functional connectivity` dataset
of 20 subjects, which is already preprocessed, downsampled to 4mm isotropic resolution, and publicly available at
`<https://osf.io/5hju4/files/>`_. We use utilities
:func:`fetch_development_fmri` implemented in nilearn for automatic fetching of this
4 changes: 2 additions & 2 deletions doc/decoding/searchlight.rst
@@ -211,7 +211,7 @@ equivalent to a permuted F-test by setting the argument
``two_sided_test`` to ``True``. In the example above, we do perform a two-sided
test but add back the sign of the effect at the end using the t-scores obtained
on the original (non-permuted) data. Thus, we can perform two one-sided tests
(a given contrast and its opposite) for the price of one single run.
(a given :term:`contrast` and its opposite) for the price of one single run.
The example results can be interpreted as follows: viewing faces significantly
activates the Fusiform Face Area as compared to viewing houses, while viewing
houses does not reveal significant supplementary activations as compared to
@@ -224,7 +224,7 @@ viewing faces.
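
A hedged sketch of such a call (``conditions`` and ``fmri_masked`` are assumed to be the design vector and the masked data from the preceding steps; they are not defined here)::

    import numpy as np
    from nilearn.mass_univariate import permuted_ols

    neg_log_pvals, t_scores, _ = permuted_ols(
        conditions, fmri_masked,
        n_perm=10000, two_sided_test=True, n_jobs=1)
    # Re-attach the sign of the effect to the two-sided scores.
    signed_neg_log_pvals = neg_log_pvals * np.sign(t_scores)
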
assuming that nothing happens (i.e. under the null hypothesis).
Therefore, a small *p-value* indicates that there is a small chance
of getting this data if no real difference existed, so the observed
voxel must be significant.
:term:`voxel` must be significant.

.. [2]

2 changes: 1 addition & 1 deletion doc/decoding/space_net.rst
@@ -47,7 +47,7 @@ Related example

Empirical comparisons using this method have been removed from
documentation in version 0.7 to keep its computational cost low. You can
easily try SpaceNet instead of FREM in :ref:`mixed gambles study <sphx_glr_auto_examples_02_decoding_plot_mixed_gambles_frem.py>` or :ref:`Haxby study <sphx_glr_auto_examples_02_decoding_plot_haxby_frem.py>`.
easily try SpaceNet instead of :term:`FREM` in :ref:`mixed gambles study <sphx_glr_auto_examples_02_decoding_plot_mixed_gambles_frem.py>` or :ref:`Haxby study <sphx_glr_auto_examples_02_decoding_plot_haxby_frem.py>`.
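
A sketch of that substitution (inputs mirror the FREM examples and are assumed to exist: training/test images and their labels)::

    from nilearn.decoding import SpaceNetClassifier

    decoder = SpaceNetClassifier(penalty="graph-net")  # or "tv-l1"
    decoder.fit(fmri_imgs_train, labels_train)
    accuracy = (decoder.predict(fmri_imgs_test) == labels_test).mean()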

.. seealso::

2 changes: 1 addition & 1 deletion doc/developers/group_sparse_covariance.rst
@@ -255,7 +255,7 @@ stopped.
This technique is only a way to stop iterating based on the
estimate value instead of the criterion value. It does *not* ensure a
given uncertainty on the estimate. This has been tested on synthetic
and real fMRI data: using two different starting points leads to two
and real :term:`fMRI` data: using two different starting points leads to two
estimates that can differ (in max norm) by more than the threshold
(see next paragraph). However, it has the same property as the duality
gap criterion: quickly converging cases use fewer iterations than
14 changes: 7 additions & 7 deletions doc/glm/first_level_model.rst
@@ -8,21 +8,21 @@ First level models

First level models are, in essence, linear regression models run at the level of a single
session or single subject. The model is applied on a voxel-wise basis, either on the whole
brain or within a region of interest. The timecourse of each voxel is regressed against a
predicted BOLD response created by convolving the haemodynamic response function (HRF) with
brain or within a region of interest. The timecourse of each :term:`voxel` is regressed against a
predicted :term:`BOLD` response created by convolving the haemodynamic response function (HRF) with
a set of predictors defined within the design matrix.
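
A sketch of building such a design matrix with made-up timing information::

    import numpy as np
    import pandas as pd
    from nilearn.glm.first_level import make_first_level_design_matrix

    t_r = 2.0
    frame_times = np.arange(100) * t_r  # one entry per scan
    events = pd.DataFrame({"onset": [10., 60., 110.],
                           "duration": [5., 5., 5.],
                           "trial_type": ["faces", "houses", "faces"]})
    design_matrix = make_first_level_design_matrix(frame_times, events,
                                                   hrf_model="spm")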


HRF models
==========

Nilearn offers a few different HRF models including the commonly used double-gamma SPM model ('spm')
Nilearn offers a few different :term:`HRF` models including the commonly used double-gamma :term:`SPM` model ('spm')
and the model shape proposed by G. Glover ('glover'), both allowing the option of adding time and
dispersion derivatives. Adding these derivatives allows better modelling of any uncertainty in
timing information. In addition, an FIR (finite impulse response, 'fir') model of the HRF is also available.
timing information. In addition, an :term:`FIR` (finite impulse response, 'fir') model of the :term:`HRF` is also available.

In order to visualize the predicted regressor prior to plugging it into the linear model, use the
function :func:`nilearn.glm.first_level.compute_regressor`, or explore the HRF plotting
function :func:`nilearn.glm.first_level.compute_regressor`, or explore the :term:`HRF` plotting
example :ref:`sphx_glr_auto_examples_04_glm_first_level_plot_hrf.py`.
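
For instance, a sketch of a single condition regressor (timings are arbitrary)::

    import numpy as np
    from nilearn.glm.first_level import compute_regressor

    exp_condition = np.array([[10., 40., 70.],   # onsets (s)
                              [1., 1., 1.],      # durations (s)
                              [1., 1., 1.]])     # amplitudes
    frame_times = np.arange(0., 200., 2.)        # scan times for a 2 s TR
    regressor, names = compute_regressor(exp_condition, "glover",
                                         frame_times, con_id="faces")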


@@ -88,7 +88,7 @@ Fitting a first level model
===========================

The :class:`nilearn.glm.first_level.FirstLevelModel` class provides the tools to fit the linear model to
the fMRI data. The :func:`nilearn.glm.first_level.FirstLevelModel.fit()` function takes the fMRI data
the :term:`fMRI` data. The :func:`nilearn.glm.first_level.FirstLevelModel.fit()` function takes the fMRI data
and design matrix as input and fits the GLM. Like other Nilearn functions,
:func:`nilearn.glm.first_level.FirstLevelModel.fit()` accepts file names as input, but can also
work with :nipy:`NiftiImage objects <nibabel/nibabel_images.html>`. More information about
@@ -102,7 +102,7 @@ input formats is available :ref:`here <loading_data>` ::
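
    # A hedged sketch with assumed inputs: `fmri_img` and `design_matrix`
    # stand for the functional image and the design matrix built earlier;
    # the 2 s repetition time is illustrative.
    from nilearn.glm.first_level import FirstLevelModel

    first_level_model = FirstLevelModel(t_r=2.0, hrf_model="spm")
    first_level_model = first_level_model.fit(fmri_img,
                                              design_matrices=design_matrix)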
Computing contrasts
-------------------

To get more interesting results out of the GLM model, contrasts can be computed between regressors of interest.
To get more interesting results out of the :term:`GLM` model, contrasts can be computed between regressors of interest.
The :func:`nilearn.glm.first_level.FirstLevelModel.compute_contrast` function can be used for that. First,
the contrasts of interest must be defined. In the spm_multimodal_fmri dataset referenced above, subjects are
presented with 'normal' and 'scrambled' faces. The basic contrasts that can be constructed are the main effects
Expand Down