Running XCP-D

Warning

XCP-D may not run correctly on M1 chips.

Execution and Input Formats

The XCP-D workflow takes fMRIPrep, NiBabies, and HCP outputs in the form of BIDS derivatives. In these examples, we use an fMRIPrep output directory.

The inputs must include, at minimum, anatomical and functional derivatives with at least one preprocessed BOLD image. Additionally, the derivatives should be organized in directories that can be parsed by the BIDS online validator, even if the dataset is not fully BIDS valid (we do not require BIDS-valid directories). Each directory must also include a valid dataset_description.json.
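A minimal dataset_description.json for a derivatives directory might look like the following. fMRIPrep and NiBabies write this file automatically; the version number and "GeneratedBy" contents here are purely illustrative:

```json
{
   "Name": "fMRIPrep - fMRI PREProcessing workflow",
   "BIDSVersion": "1.4.0",
   "DatasetType": "derivative",
   "GeneratedBy": [
      {
         "Name": "fMRIPrep",
         "Version": "22.0.2"
      }
   ]
}
```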

The exact command to run xcp_d depends on the installation method and the data to be processed. We start with a bare-metal (manually prepared environment) installation, as its command line is simpler. For example, XCP-D can be executed from the command line to process fMRIPrep outputs using the following structure:

xcp_d <fmriprep_dir> <output_dir> --cifti --despike --head_radius 40 -w /wkdir --smoothing 6

However, we strongly recommend using container technologies (see the installation documentation). In that case, the command line consists of a preamble that configures the container execution, followed by the same XCP-D command-line options you would use with a bare-metal installation.

Command-Line Arguments

--band-stop-min

For the "notch" filter option, we recommend the following values, based on participant age:

=================  =======================
Age Range (years)  Recommended Value (bpm)
=================  =======================
< 1                30
1 - 2              25
2 - 6              20
6 - 12             15
12 - 18            12
19 - 65            12
65 - 80            12
> 80               10
=================  =======================
--band-stop-max

For the "notch" filter option, we recommend the following values, based on participant age:

=================  =======================
Age Range (years)  Recommended Value (bpm)
=================  =======================
< 1                60
1 - 2              50
2 - 6              35
6 - 12             25
12 - 18            20
19 - 65            18
65 - 80            28
> 80               30
=================  =======================
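The band-stop values above are given in breaths per minute (bpm), while temporal filters operate in Hz. XCP-D handles this conversion internally, but the relationship is simply Hz = bpm / 60. A quick sketch (the example values are just the 6-12 year row of the tables above):

```python
def bpm_to_hz(bpm):
    """Convert breaths per minute to Hz (cycles per second)."""
    return bpm / 60.0

# Recommended notch-filter band for a 10-year-old participant,
# taken from the tables above: 15-25 bpm.
band_stop_min_hz = bpm_to_hz(15)  # 0.25 Hz
band_stop_max_hz = bpm_to_hz(25)  # ~0.417 Hz
```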
--warp-surfaces-native2std

The surface files generated by the workflow:

<source_entities>_space-fsLR_den-32k_hemi-<L|R>_pial.surf.gii
    The gray matter / pial matter border.

<source_entities>_space-fsLR_den-32k_hemi-<L|R>_smoothwm.surf.gii
    The smoothed gray matter / white matter border for the cortex.

<source_entities>_space-fsLR_den-32k_hemi-<L|R>_midthickness.surf.gii
    The midpoints between the white matter and pial surfaces. This is derived from the FreeSurfer graymid (mris_expand with distance=0.5 applied to the white matter surfaces).

<source_entities>_space-fsLR_den-32k_hemi-<L|R>_inflated.surf.gii
    An inflation of the midthickness surface (useful for visualization). This file is only created if the input type is "hcp" or "dcan".

<source_entities>_space-fsLR_den-32k_hemi-<L|R>_desc-hcp_midthickness.surf.gii
    The midpoints between the white matter and pial surfaces. This is created by averaging the coordinates from the white matter and pial surfaces.

<source_entities>_space-fsLR_den-32k_hemi-<L|R>_desc-hcp_inflated.surf.gii
    An inflation of the HCP-style midthickness surface (useful for visualization). This file is only created if the input type is "fmriprep" or "nibabies".

<source_entities>_space-fsLR_den-32k_hemi-<L|R>_desc-hcp_vinflated.surf.gii
    A very-inflated HCP-style midthickness surface (also for visualization). This file is only created if the input type is "fmriprep" or "nibabies".

Filtering Inputs with BIDS Filter Files

XCP-D allows users to choose which preprocessed files will be post-processed with the --bids-filter-file parameter. This argument must point to a JSON file, containing filters that will be fed into PyBIDS.

The keys in this JSON file are unique to XCP-D. They are our internal terms for different inputs that will be selected from the preprocessed dataset.

"bold" determines which preprocessed BOLD files will be chosen. You can set a number of entities here, including "session", "task", "space", "resolution", and "density". We recommend NOT setting the datatype, suffix, or file extension in the filter file.

Warning

We do not recommend applying additional filters to any of the following fields. We have documented them here, for edge cases where they might be useful, but the only field that most users should filter is "bold".

"t1w" selects a native T1w-space, preprocessed T1w file.

"t2w" selects a native T1w-space, preprocessed T2w file.

"anat_dseg" selects a native T1w-space segmentation file. This file is primarily used for figures.

"anat_brainmask" selects a native T1w-space brain mask.

"anat_to_template_xfm" selects a transform from T1w (or T2w, if no T1w image is available) space to standard space. The standard space that will be used depends on the "bold" files that are selected.

"template_to_anat_xfm" selects a transform from standard space to T1w/T2w space. Again, the standard space is determined based on other files.

Example bids-filter-file

In this example file, we only run XCP-D on resting-state preprocessed BOLD runs from session "01".

{
   "bold": {
      "session": ["01"],
      "task": ["rest"]
   }
}
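Under the hood, each filter dictionary is passed to PyBIDS as extra keyword arguments when querying the preprocessed dataset. Conceptually (this is a simplified sketch, not XCP-D's actual code, and the default query values are illustrative), the "bold" filter above is layered onto XCP-D's own defaults:

```python
import json

# The filter file from the example above.
filter_file_contents = """
{
   "bold": {
      "session": ["01"],
      "task": ["rest"]
   }
}
"""
filters = json.loads(filter_file_contents)

# Hypothetical defaults for selecting BOLD files; this is why users should
# not set datatype, suffix, or extension in the filter file themselves.
default_query = {"datatype": "func", "suffix": "bold"}

# The user's filters are merged on top of the defaults, and the combined
# dictionary would be passed to a pybids BIDSLayout.get() call.
query = {**default_query, **filters["bold"]}
```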

Running XCP-D via Docker containers

If you are running XCP-D locally, we recommend Docker. See installation_container_technologies for installation instructions.

In order to run Docker smoothly, it is best to prevent permission issues associated with the root file system. Running Docker as a regular user on the host ensures that files written during the container execution are owned by that user.

A Docker container can be created using the following command. Note that the freesurfer bind mount is only necessary for fMRIPrep versions earlier than 22.0.2:

docker run --rm -it \
   -v /dset/derivatives/fmriprep:/fmriprep:ro \
   -v /tmp/wkdir:/work:rw \
   -v /dset/derivatives:/out:rw \
   -v /dset/derivatives/freesurfer:/freesurfer:ro \
   pennlinc/xcp_d:latest \
   /fmriprep /out participant \
   --cifti --despike --head_radius 40 -w /work --smoothing 6

Running XCP-D via Singularity containers

If you are computing on an HPC (high-performance computing) cluster, we recommend using Singularity. See installation_container_technologies for installation instructions.

Warning

XCP-D (and perhaps other Docker-based Singularity images) may not work with Singularity <=2.4. We strongly recommend using Singularity 3+. For more information, see this xcp_d issue and this Singularity issue.

If the data to be processed are accessible from the HPC or a personal computer, you are ready to run xcp_d:

singularity run --cleanenv xcp_d.simg \
    path/to/data/fmri_dir  \
    path/to/output/dir \
    --participant-label label

Relevant aspects of the $HOME directory within the container

By default, Singularity binds the user's $HOME directory on the host to the /home/$USER directory (or equivalent) in the container. Most of the time, it also redefines the $HOME environment variable to point to the corresponding mount point in /home/$USER. However, these defaults can be overridden on your system, so it is recommended that you check your settings with your system administrator. If your Singularity installation allows it, you can work around the $HOME specification by combining the bind-mount argument (-B) with the home-overwrite argument (--home), as follows:

singularity run -B $HOME:/home/xcp \
    --home /home/xcp \
    --cleanenv xcp_d.simg \
    <xcp_d arguments>

Once the container options and the image to be run are specified, the remaining command-line options are the same as for a bare-metal installation.

Custom Confounds

XCP-D can include custom confounds in its denoising. Here, you can supply your confounds, and optionally add these to a confound strategy already supported in XCP-D.

To add custom confounds to your workflow, use the --custom-confounds parameter, and provide a folder containing the custom confounds files for all of the subjects, sessions, and tasks you plan to post-process.

The individual confounds files should be tab-delimited, with one column for each regressor, and one row for each volume in the data being denoised.
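For example, a custom confounds file for a 200-volume run could be written with the Python standard library alone (pandas works just as well). The regressor names and values here are arbitrary placeholders:

```python
import csv
import random

n_volumes = 200
random.seed(0)

# Two arbitrary nuisance regressors, one value per volume.
rows = [
    {"my_regressor_1": random.gauss(0, 1), "my_regressor_2": random.gauss(0, 1)}
    for _ in range(n_volumes)
]

# Write a tab-delimited file: one column per regressor, one row per volume.
with open("sub-X_ses-Y_task-Z_run-01_desc-confounds_timeseries.tsv", "w", newline="") as f:
    writer = csv.DictWriter(
        f, fieldnames=["my_regressor_1", "my_regressor_2"], delimiter="\t"
    )
    writer.writeheader()
    writer.writerows(rows)
```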

Including Signal Regressors

Let's say you have some nuisance regressors that are not necessarily orthogonal to some associated regressors that are ostensibly signal. For example, if you ran tedana on multi-echo data, you would have a series of "rejected" (noise) and "accepted" (signal) ICA components. Because tedana uses a spatial ICA, these components' time series are not necessarily independent, and there can be shared variance between them. If you want to properly denoise your data using the noise components, you need to account for the variance they share with the signal components.

XCP-D allows users to include the signal regressors in their custom confounds file, so that the noise regressors can be orthogonalized with respect to the signal regressors.

For more information about different types of denoising, see tedana's documentation, this NeuroStars topic, and/or Pruim et al. (2015).

So how do we implement this in XCP-D? In order to define regressors that should be treated as signal, and thus orthogonalize the noise regressors with respect to known signals instead of regressing them without modification, you should include those regressors in your custom confounds file, with column names starting with signal__ (lower-case "signal", followed by two underscores).
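To illustrate what this orthogonalization means, here is a numpy sketch of the general technique (not XCP-D's actual implementation): each noise regressor is replaced by its residual after projecting out the signal regressors, so any variance it shares with the signal columns is removed before denoising.

```python
import numpy as np

rng = np.random.default_rng(0)
n_volumes = 200

# One "signal" regressor and one "noise" regressor that share variance,
# as spatially independent ICA components' time series can.
signal = rng.standard_normal((n_volumes, 1))
noise = 0.5 * signal + rng.standard_normal((n_volumes, 1))

# Regress the signal out of the noise regressor and keep only the
# residual: this is the orthogonalization step.
beta, *_ = np.linalg.lstsq(signal, noise, rcond=None)
noise_orth = noise - signal @ beta

# The orthogonalized noise regressor is now uncorrelated with the signal.
shared_variance = (signal.T @ noise_orth).item()  # ~0
```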

Important

XCP-D will automatically orthogonalize noise regressors with respect to signal regressors with any nuisance-regressor option that uses AROMA regressors (e.g., aroma or aroma_gsr).

Task Regression

If you want to regress task-related signals out of your data, you can use the custom confounds option to do it.

Here we document how to include task effects as confounds.

Tip

The basic approach to task regression is to convolve your task regressors with an HRF, then save those regressors to a custom confounds file.

Warning

This method is still under development.

We recommend using a tool like Nilearn to generate convolved regressors from BIDS events files. See this example: https://nilearn.github.io/stable/auto_examples/04_glm_first_level/plot_design_matrix.html#create-design-matrices

import numpy as np
import pandas as pd

from nilearn.glm.first_level import make_first_level_design_matrix

N_VOLUMES = 200
TR = 0.8
frame_times = np.arange(N_VOLUMES) * TR
events_df = pd.read_table("sub-X_ses-Y_task-Z_run-01_events.tsv")

task_confounds = make_first_level_design_matrix(
   frame_times,
   events_df,
   drift_model=None,
   add_regs=None,
   hrf_model="spm",
)

# The design matrix will include a constant column, which we should drop
task_confounds = task_confounds.drop(columns="constant")

# Assuming that the fMRIPrep confounds file is named
# "sub-X_ses-Y_task-Z_run-01_desc-confounds_timeseries.tsv",
# we will name the custom confounds file the same thing, in a separate folder.
task_confounds.to_csv(
   "/my/project/directory/custom_confounds/sub-X_ses-Y_task-Z_run-01_desc-confounds_timeseries.tsv",
   sep="\t",
   index=False,
)

Then, when you run XCP-D, you can use the flag --custom_confounds /my/project/directory/custom_confounds.

Command Line XCP-D with Custom Confounds

Finally, supply the confounds to xcp_d with the --custom_confounds option. This option should point to the directory containing the file, rather than to the file itself; XCP-D identifies the correct file by matching its name against the preprocessed BOLD data's associated confounds file. You can simultaneously perform additional confound regression by including, for example, --nuisance-regressors 36P in the call.

singularity run --cleanenv -B /my/project/directory:/mnt xcpabcd_latest.simg \
   /mnt/input/fmriprep \
   /mnt/output/directory \
   participant \
   --participant_label X \
   --task-id Z \
   --nuisance-regressors 36P \
   --custom_confounds /mnt/custom_confounds

Custom Parcellations

While XCP-D comes with many built-in parcellations, we understand that many users will want to use custom parcellations. If you use the --cifti option, you can use the Human Connectome Project's wb_command to generate the time series:

wb_command \
   -cifti-parcellate \
   {SUB}_ses-{SESSION}_task-{TASK}_run-{RUN}_space-fsLR_den-91k_desc-residual_bold.dtseries.nii \
   your_parcels.dlabel.nii \
   COLUMN \
   {SUB}_ses-{SESSION}_task-{TASK}_run-{RUN}_space-fsLR_den-91k_desc-residual_bold.ptseries.nii

After this, if one wishes to have a connectivity matrix:

wb_command \
   -cifti-correlation \
   {SUB}_ses-{SESSION}_task-{TASK}_run-{RUN}_space-fsLR_den-91k_desc-residual_bold.ptseries.nii \
   {SUB}_ses-{SESSION}_task-{TASK}_run-{RUN}_space-fsLR_den-91k_desc-residual_bold.pconn.nii

More information can be found at the HCP documentation.

If you use the default NIfTI processing pipeline, you can use Nilearn's NiftiLabelsMasker; see https://nilearn.github.io/stable/auto_examples/06_manipulating_images/plot_nifti_labels_simple.html#extracting-signals-from-brain-regions-using-the-niftilabelsmasker.

Advanced Applications

XCP-D can be used in conjunction with other tools, such as tedana and phys2denoise. We have attempted to document these applications with working code in PennLINC/xcp_d-examples. If there is an application you think would be useful to document, please open an issue in that repository.

Preprocessing Requirements for XCP-D

XCP-D is designed to ingest data from a variety of different preprocessing pipelines. However, each supported pipeline must be explicitly supported within XCP-D in order for the workflow to select the correct files.

Additionally, XCP-D may require files that are only created with specific settings in the preprocessing pipelines.

fMRIPrep/Nibabies

To work with fMRIPrep or Nibabies derivatives, XCP-D needs outputs in one of a few template spaces: "MNI152NLin6Asym", "MNI152NLin2009cAsym", "MNIInfant", or "fsLR". We may add support for additional templates in the future, but currently you must have at least one of these among your output spaces. XCP-D does not impose specific requirements on the resolution of volumetric derivatives, but we do require that fsLR-space CIFTIs be output at 91k density.

Troubleshooting

Logs and crashfiles are written to the <output dir>/xcp_d/sub-<participant_label>/log directory. Information on how to customize and understand these files can be found on the nipype debugging page.

Support and communication

All bugs, concerns and enhancement requests for this software can be submitted here: https://github.com/PennLINC/xcp_d/issues.

If you have a question about using XCP-D, please create a new topic on NeuroStars with the "Software Support" category and the "xcp_d" tag. The XCP-D developers follow NeuroStars, and will be able to answer your question there.