[DOC] replace session by run to align with BIDS terminology #4214

Merged
merged 1 commit into from Jan 12, 2024
2 changes: 1 addition & 1 deletion doc/connectivity/connectome_extraction.rst
@@ -258,7 +258,7 @@ The group connectivity is computed using all the subjects timeseries.:
connectivities = measure.fit([time_series_1, time_series_2, ...])
group_connectivity = measure.mean_

Deviations from this mean in the tangent space are provided in the connectivities array and can be used to compare different groups/sessions. In practice, the tangent measure can outperform the correlation and partial correlation measures, especially for noisy or heterogeneous data.
Deviations from this mean in the tangent space are provided in the connectivities array and can be used to compare different groups/runs. In practice, the tangent measure can outperform the correlation and partial correlation measures, especially for noisy or heterogeneous data.
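As a rough sketch of what the tangent parametrization does, the deviation of one covariance matrix from the group mean can be written with plain numpy/scipy (a minimal illustration under simplifying assumptions, not nilearn's actual implementation):

```python
import numpy as np
from scipy.linalg import inv, logm, sqrtm

def tangent_deviation(cov, mean_cov):
    """Whiten a covariance by the group mean, then take the matrix logarithm."""
    whitener = inv(sqrtm(mean_cov))
    return logm(whitener @ cov @ whitener)

# A run whose covariance equals the group mean maps to the zero matrix in the
# tangent space; nonzero entries quantify how a run departs from the group.
mean = np.array([[2.0, 0.5], [0.5, 1.0]])
print(np.allclose(tangent_deviation(mean, mean), 0.0, atol=1e-8))
```

By construction, deviations are expressed in a common (Euclidean) tangent space, which is what makes them comparable across groups or runs.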


.. topic:: **Full example**
6 changes: 3 additions & 3 deletions doc/decoding/decoding_intro.rst
@@ -248,7 +248,7 @@ rule of thumb).

As general advice:

* To train a decoder on one subject data, try to leave at least one session
* To train a decoder on one subject's data, try to leave at least one run
out to have an independent test.

* To train a decoder across different subjects' data, leaving some subjects' data
@@ -260,9 +260,9 @@ As a general advice :


To improve our first pipeline for the Haxby example, we can leave one entire
session out. To do this, we can pass a ``LeaveOneGroupOut`` cross-validation
run out. To do this, we can pass a ``LeaveOneGroupOut`` cross-validation
object from scikit-learn to our ``Decoder``. Fitting it with the information of
groups=`session_labels` will use one session as test set.
groups=`run_labels` will use one run as test set.
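The behaviour of ``LeaveOneGroupOut`` can be previewed with scikit-learn alone, using made-up run labels (the ``Decoder`` itself is fitted exactly as described above):

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

# Made-up run labels: 9 samples acquired over 3 runs.
run_labels = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])
X = np.arange(9).reshape(-1, 1)

cv = LeaveOneGroupOut()
for train_idx, test_idx in cv.split(X, groups=run_labels):
    # Each fold holds out exactly one full run as the test set.
    print("test run:", sorted(set(run_labels[test_idx])))
```

Passing this ``cv`` object to the ``Decoder`` therefore yields one cross-validation score per held-out run.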

.. note::
Full code example can be found at :
2 changes: 1 addition & 1 deletion doc/decoding/estimator_choice.rst
@@ -149,7 +149,7 @@ In :class:`nilearn.decoding.DecoderRegressor` you can use some of these objects
* What is done to the data **before** applying the estimator is
often **more important** than the choice of estimator. Typically,
standardizing the data is important, smoothing can often be useful,
and nuisance effects, such as session effect, must be removed.
and nuisance effects, such as run effect, must be removed.

* Many more estimators are available in scikit-learn (see the
`scikit-learn documentation on supervised learning
2 changes: 1 addition & 1 deletion doc/glm/first_level_model.rst
@@ -7,7 +7,7 @@ First level models
.. topic:: **Page summary**

First level models are, in essence, linear regression models run at the level of a single
session or single subject. The model is applied on a voxel-wise basis, either on the whole
run or single subject. The model is applied on a voxel-wise basis, either on the whole
brain or within a region of interest. The timecourse of each :term:`voxel` is regressed against a
predicted :term:`BOLD` response created by convolving the haemodynamic response function (HRF) with
a set of predictors defined within the design matrix.
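To make the convolution step concrete, here is a toy sketch with numpy/scipy; the gamma-shaped HRF and the timing parameters below are assumptions for illustration, not nilearn's canonical HRF model:

```python
import numpy as np
from scipy.stats import gamma

t_r = 2.0                            # repetition time in seconds (made up)
frame_times = np.arange(0, 60, t_r)  # one sample per scan

# Boxcar predictor: stimulation "on" during the first five scans.
boxcar = np.zeros_like(frame_times)
boxcar[:5] = 1.0

# Toy gamma-shaped HRF peaking a few seconds after stimulus onset.
hrf = gamma.pdf(frame_times, a=6)

# Predicted BOLD response: stimulus time course convolved with the HRF,
# truncated back to the length of the run.
predicted = np.convolve(boxcar, hrf)[: len(frame_times)]
```

The resulting ``predicted`` time course is delayed and smoothed relative to the boxcar, which is exactly why the raw stimulus timing is never regressed against the signal directly.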
18 changes: 13 additions & 5 deletions doc/glm/glm_intro.rst
@@ -10,11 +10,17 @@ A primer on BOLD-fMRI data analysis
What is fMRI ?
--------------

Functional magnetic resonance imaging (:term:`fMRI`) is based on the fact that when local neural activity increases, increases in metabolism and blood flow lead to fluctuations of the relative concentrations of oxyhaemoglobin (the red cells in the blood that carry oxygen) and deoxyhaemoglobin (the same red cells after they have delivered the oxygen). Oxyhaemoglobin and deoxyhaemoglobin have different magnetic properties (diamagnetic and paramagnetic, respectively), and they affect the local magnetic field in different ways. The signal picked up by the MRI scanner is sensitive to these modifications of the local magnetic field. To record cerebral activity during functional sessions, the scanner is tuned to detect this "Blood Oxygen Level Dependent" (:term:`BOLD`) signal.
Functional magnetic resonance imaging (:term:`fMRI`) is based on the fact that when local neural activity increases, increases in metabolism and blood flow lead to fluctuations of the relative concentrations of oxyhaemoglobin (the red cells in the blood that carry oxygen) and deoxyhaemoglobin (the same red cells after they have delivered the oxygen). Oxyhaemoglobin and deoxyhaemoglobin have different magnetic properties (diamagnetic and paramagnetic, respectively), and they affect the local magnetic field in different ways.
The signal picked up by the MRI scanner is sensitive to these modifications of the local magnetic field. To record cerebral activity during functional runs,
the scanner is tuned to detect this "Blood Oxygen Level Dependent" (:term:`BOLD`) signal.

Brain activity is measured in sessions that span several minutes, during which the participant performs some cognitive task and the scanner acquires brain images, typically every 2 or 3 seconds (the time between two successive image acquisition is called the Repetition time, or :term:`TR`).
Brain activity is measured in runs that span several minutes,
during which the participant performs some cognitive task and the scanner acquires brain images,
typically every 2 or 3 seconds (the time between two successive image acquisitions is called the Repetition time, or :term:`TR`).

A cerebral MR image provides a 3D image of the brain that can be decomposed into `voxels`_ (the equivalent of pixels, but in 3 dimensions). The series of images acquired during a functional session provides, in each voxel, a time series of positive real number representing the MRI signal, sampled at the :term:`TR`.
A cerebral MR image provides a 3D image of the brain that can be decomposed into `voxels`_ (the equivalent of pixels, but in 3 dimensions).
The series of images acquired during a functional run provides, in each voxel,
a time series of positive real numbers representing the MRI signal, sampled at the :term:`TR`.

.. _voxels: https://en.wikipedia.org/wiki/Voxel

@@ -25,7 +31,8 @@ A cerebral MR image provides a 3D image of the brain that can be decomposed into
fMRI data modelling
-------------------

One way to analyze times series consists in comparing them to a *model* built from our knowledge of the events that occurred during the functional session. Events can correspond to actions of the participant (e.g. button presses), presentations of sensory stimui (e.g. sound, images), or hypothesized internal processes (e.g. memorization of a stimulus), ...
One way to analyze time series consists in comparing them to a *model* built from our knowledge of the events that occurred during the functional run.
Events can correspond to actions of the participant (e.g. button presses), presentations of sensory stimuli (e.g. sounds, images), or hypothesized internal processes (e.g. memorization of a stimulus), and so on.


.. figure:: ../images/stimulation-time-diagram.png
@@ -57,7 +64,8 @@ Correlations are computed separately at each :term:`voxel` and a correlation map
.. figure:: ../images/example-spmZ_map.png


In most :term:`fMRI` experiments, several predictors are needed to fully describe the events occurring during the session -- for example, the experimenter may want to distinguish brain activities linked to the perception of auditory stimuli and to button presses. To find the effect specific to each predictor, a multiple `linear regression`_ approach is typically used: all predictors are entered as columns in a *design matrix* and the software finds the linear combination of these columns that best fits the signal. The weights assigned to each predictor by this linear combination are estimates of the contribution of this predictor to the response in the voxel. One can plot this using effect size maps or, maps showing their statistical significance (how unlikely they are under the null hypothesis of no effect).
In most :term:`fMRI` experiments, several predictors are needed to fully describe the events occurring during the run -- for example, the experimenter may want to distinguish brain activities linked to the perception of auditory stimuli and to button presses.
To find the effect specific to each predictor, a multiple `linear regression`_ approach is typically used: all predictors are entered as columns in a *design matrix* and the software finds the linear combination of these columns that best fits the signal. The weights assigned to each predictor by this linear combination are estimates of the contribution of this predictor to the response in the voxel. One can plot this using effect size maps or maps showing their statistical significance (how unlikely they are under the null hypothesis of no effect).
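The design-matrix fit described here can be sketched with numpy on synthetic data (the predictor names and weights below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_scans = 100

# Design matrix: two hypothetical predictors plus an intercept column.
audio = rng.random(n_scans)
press = rng.random(n_scans)
X = np.column_stack([audio, press, np.ones(n_scans)])

# Synthetic voxel time course: known weights plus a little noise.
true_beta = np.array([2.0, -1.0, 0.5])
y = X @ true_beta + 0.01 * rng.standard_normal(n_scans)

# The GLM fit finds the linear combination of design columns
# that best matches the voxel's signal, in the least-squares sense.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Each entry of ``beta`` estimates the contribution of one predictor; effect size maps are simply these estimates computed at every voxel.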


.. _linear regression: https://en.wikipedia.org/wiki/Linear_regression
4 changes: 2 additions & 2 deletions doc/introduction.rst
@@ -170,13 +170,13 @@ For new-comers, we recommend going through the following examples in the suggest
.. only:: html

.. image:: /auto_examples/00_tutorials/images/thumb/sphx_glr_plot_single_subject_single_run_thumb.png
:alt: Intro to GLM Analysis: a single-session, single-subject fMRI dataset
:alt: Intro to GLM Analysis: a single-run, single-subject fMRI dataset

:ref:`sphx_glr_auto_examples_00_tutorials_plot_single_subject_single_run.py`

.. raw:: html

<div class="sphx-glr-thumbnail-title">Intro to GLM Analysis: a single-session, single-subject fMRI dataset</div>
<div class="sphx-glr-thumbnail-title">Intro to GLM Analysis: a single-run, single-subject fMRI dataset</div>
</div>


2 changes: 1 addition & 1 deletion doc/manipulating_images/input_output.rst
@@ -222,7 +222,7 @@ with ``get_affine()`` and ``get_header()``.
`FSL <https://fsl.fmrib.ox.ac.uk/fsl/>`_ users tend to
prefer this format.
- several 3D matrices representing each time point (single 3D volume) of the
session, stored in set of 3D Nifti or analyse files.
run, stored in a set of 3D Nifti or Analyze files.
`SPM <https://www.fil.ion.ucl.ac.uk/spm/>`_ users tend
to prefer this format.

4 changes: 2 additions & 2 deletions doc/manipulating_images/manipulating_images.rst
@@ -148,11 +148,11 @@ Relevant functions:
:func:`nilearn.masking.compute_brain_mask`.
* compute a mask from images with a flat background:
:func:`nilearn.masking.compute_background_mask`
* compute for multiple sessions/subjects:
* compute for multiple runs/subjects:
:func:`nilearn.masking.compute_multi_epi_mask`
:func:`nilearn.masking.compute_multi_background_mask`
* apply: :func:`nilearn.masking.apply_mask`
* intersect several masks (useful for multi sessions/subjects): :func:`nilearn.masking.intersect_masks`
* intersect several masks (useful for multi runs/subjects): :func:`nilearn.masking.intersect_masks`
* unmasking: :func:`nilearn.masking.unmask`
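At its core, masking is boolean indexing over the 4D array; a schematic numpy version (nilearn's functions additionally handle Nifti I/O, affines and resampling):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy 4D "image": a 4x4x4 volume with 10 time points.
data = rng.random((4, 4, 4, 10))
mask = data.mean(axis=-1) > 0.5  # boolean 3D mask

# apply_mask-like step: (x, y, z, t) -> (n_timepoints, n_voxels_in_mask)
masked = data[mask].T

# unmask-like step: scatter the 2D array back into a 4D volume,
# with zeros outside the mask.
unmasked = np.zeros_like(data)
unmasked[mask] = masked.T
```

The intersection of several masks (the multi-run/multi-subject case) is then just a logical AND of the boolean arrays.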


26 changes: 13 additions & 13 deletions examples/00_tutorials/plot_decoding_tutorial.py
@@ -255,20 +255,20 @@
# We can speed things up to use all the CPUs of our computer with the
# n_jobs parameter.
#
# The best way to do cross-validation is to respect the structure of
# the experiment, for instance by leaving out full sessions of
# acquisition.
# The best way to do cross-validation is to respect
# the structure of the experiment,
# for instance by leaving out full runs of acquisition.
#
# The number of the session is stored in the CSV file giving the
# behavioral data. We have to apply our session mask, to select only cats
# and faces.
session_label = behavioral["chunks"][condition_mask]
# The number of the run is stored in the CSV file giving
# the behavioral data.
# We have to apply our run mask, to select only cats and faces.
run_label = behavioral["chunks"][condition_mask]

# %%
# The :term:`fMRI` data is acquired by sessions,
# and the noise is autocorrelated in a
# given session. Hence, it is better to predict across sessions when doing
# cross-validation. To leave a session out, pass the cross-validator object
# The :term:`fMRI` data is acquired by runs,
# and the noise is autocorrelated in a given run.
# Hence, it is better to predict across runs when doing cross-validation.
# To leave a run out, pass the cross-validator object
# to the cv parameter of decoder.
from sklearn.model_selection import LeaveOneGroupOut

@@ -277,7 +277,7 @@
decoder = Decoder(
estimator="svc", mask=mask_filename, standardize="zscore_sample", cv=cv
)
decoder.fit(fmri_niimgs, conditions, groups=session_label)
decoder.fit(fmri_niimgs, conditions, groups=run_label)

print(decoder.cv_scores_)

@@ -338,7 +338,7 @@
cv=cv,
standardize="zscore_sample",
)
dummy_decoder.fit(fmri_niimgs, conditions, groups=session_label)
dummy_decoder.fit(fmri_niimgs, conditions, groups=run_label)

# Now, we can compare these scores by simply taking a mean over folds
print(dummy_decoder.cv_scores_)
11 changes: 6 additions & 5 deletions examples/00_tutorials/plot_single_subject_single_run.py
@@ -1,6 +1,6 @@
"""
Intro to GLM Analysis: a single-session, single-subject fMRI dataset
====================================================================
Intro to GLM Analysis: a single-run, single-subject fMRI dataset
================================================================

In this tutorial, we use a General Linear Model (:term:`GLM`) to compare the
:term:`fMRI` signal during periods of auditory stimulation
@@ -19,9 +19,10 @@
group which develops the :term:`SPM` software.

According to :term:`SPM` documentation, 96 scans were acquired (repetition time
:term:`TR` = 7s) in one session. The paradigm consisted of alternating periods
of stimulation and rest, lasting 42s each (that is, for 6 scans). The session
started with a rest block. Auditory stimulation consisted of bi-syllabic words
:term:`TR` = 7s) in one run. The paradigm consisted of alternating periods
of stimulation and rest, lasting 42s each (that is, for 6 scans).
The run started with a rest block.
Auditory stimulation consisted of bi-syllabic words
presented binaurally at a rate of 60 per minute.
The functional data starts at scan number 4,
that is the image file ``fM00223_004``.
18 changes: 9 additions & 9 deletions examples/02_decoding/plot_haxby_anova_svm.py
@@ -38,9 +38,9 @@
# Confirm that we now have 2 conditions
print(conditions.unique())

# The number of the session is stored in the CSV file giving the behavioral
# data. We have to apply our session mask, to select only faces and houses.
session_label = behavioral["chunks"][condition_mask]
# The number of the run is stored in the CSV file giving the behavioral data.
# We have to apply our run mask, to select only faces and houses.
run_label = behavioral["chunks"][condition_mask]

# %%
# :term:`ANOVA` pipeline with :class:`nilearn.decoding.Decoder` object
@@ -76,10 +76,10 @@
# Obtain prediction scores via cross validation
# ---------------------------------------------
# Define the cross-validation scheme used for validation. Here we use a
# LeaveOneGroupOut cross-validation on the session group which corresponds to a
# leave a session out scheme, then pass the cross-validator object to the cv
# parameter of decoder.leave-one-session-out For more details please take a
# look at:
# LeaveOneGroupOut cross-validation on the run group, which corresponds to a
# leave-one-run-out scheme, then pass the cross-validator object
# to the cv parameter of the decoder.
# For more details please take a look at:
# `Measuring prediction scores using cross-validation\
# <../00_tutorials/plot_decoding_tutorial.html#measuring-prediction-scores-using-cross-validation>`_
from sklearn.model_selection import LeaveOneGroupOut
@@ -94,8 +94,8 @@
scoring="accuracy",
cv=cv,
)
# Compute the prediction accuracy for the different folds (i.e. session)
decoder.fit(func_img, conditions, groups=session_label)
# Compute the prediction accuracy for the different folds (i.e. run)
decoder.fit(func_img, conditions, groups=run_label)

# Print the CV scores
print(decoder.cv_scores_["face"])
8 changes: 4 additions & 4 deletions examples/02_decoding/plot_haxby_different_estimators.py
@@ -44,7 +44,7 @@
categories = stimuli[task_mask].unique()

# extract tags indicating to which acquisition run a tag belongs
session_labels = labels["chunks"][task_mask]
run_labels = labels["chunks"][task_mask]


# Load the fMRI data
@@ -95,7 +95,7 @@
cv=cv,
)
t0 = time.time()
decoder.fit(fmri_niimgs, classification_target, groups=session_labels)
decoder.fit(fmri_niimgs, classification_target, groups=run_labels)

classifiers_data[classifier_name] = {"score": decoder.cv_scores_}
print(f"{classifier_name:10}: {time.time() - t0:.2f}s")
@@ -163,7 +163,7 @@
stimuli = stimuli[condition_mask]
assert len(stimuli) == 216
fmri_niimgs_condition = index_img(func_filename, condition_mask)
session_labels = labels["chunks"][condition_mask]
run_labels = labels["chunks"][condition_mask]
categories = stimuli.unique()
assert len(categories) == 2

@@ -174,7 +174,7 @@
standardize="zscore_sample",
cv=cv,
)
decoder.fit(fmri_niimgs_condition, stimuli, groups=session_labels)
decoder.fit(fmri_niimgs_condition, stimuli, groups=run_labels)
classifiers_data[classifier_name] = {}
classifiers_data[classifier_name]["score"] = decoder.cv_scores_
classifiers_data[classifier_name]["map"] = decoder.coef_img_["face"]
6 changes: 3 additions & 3 deletions examples/02_decoding/plot_haxby_full_analysis.py
@@ -51,7 +51,7 @@
categories = stimuli[task_mask].unique()

# extract tags indicating to which acquisition run a tag belongs
session_labels = labels["chunks"][task_mask]
run_labels = labels["chunks"][task_mask]

# apply the task_mask to fMRI data (func_filename)
from nilearn.image import index_img
@@ -102,7 +102,7 @@
scoring="roc_auc",
standardize="zscore_sample",
)
decoder.fit(task_data, classification_target, groups=session_labels)
decoder.fit(task_data, classification_target, groups=run_labels)
mask_scores[mask_name][category] = decoder.cv_scores_[1]
mean = np.mean(mask_scores[mask_name][category])
std = np.std(mask_scores[mask_name][category])
@@ -116,7 +116,7 @@
standardize="zscore_sample",
)
dummy_classifier.fit(
task_data, classification_target, groups=session_labels
task_data, classification_target, groups=run_labels
)
mask_chance_scores[mask_name][category] = dummy_classifier.cv_scores_[
1