This pipeline is developed by the Poldrack lab at Stanford University for use at the Center for Reproducible Neuroscience (CRN), as well as for open-source software distribution.
fMRIPrep is a functional magnetic resonance imaging (fMRI) data preprocessing pipeline designed to provide an easily accessible, state-of-the-art interface that is robust to variations in scan acquisition protocols, requires minimal user input, and provides easily interpretable and comprehensive error and output reporting. It performs basic processing steps (coregistration, normalization, unwarping, noise component extraction, segmentation, skull-stripping, etc.), providing outputs that can be easily submitted to a variety of group-level analyses, including task-based or resting-state fMRI, graph theory measures, surface- or volume-based statistics, and more.
.. note::

   fMRIPrep performs minimal preprocessing. Here we define 'minimal preprocessing' as motion correction, field unwarping, normalization, bias field correction, and brain extraction. See the workflows section of our documentation for more details.
The fMRIPrep pipeline uses a combination of tools from well-known software packages, including FSL_, ANTs_, FreeSurfer_ and AFNI_. The pipeline was designed to provide the best software implementation for each stage of preprocessing, and will be updated as newer and better neuroimaging software becomes available.
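fMRIPrep is run from the command line as a BIDS App, typically on one participant at a time. The following is a minimal sketch, assuming the ``fmriprep`` executable is on your ``PATH`` and a FreeSurfer license file is available; the paths, participant label, and options shown are illustrative and should be checked against the usage documentation for your installed version.

.. code-block:: python

   # Minimal sketch of a single-subject fMRIPrep run (paths are illustrative).
   import subprocess

   cmd = [
       "fmriprep",
       "/data/bids",                 # BIDS-valid input dataset
       "/data/derivatives",          # where preprocessed outputs will go
       "participant",                # analysis level (BIDS Apps convention)
       "--participant-label", "01",  # process only sub-01
       "--fs-license-file", "/opt/freesurfer/license.txt",
   ]
   subprocess.run(cmd, check=True)   # raises if fMRIPrep exits with an error

In practice, many users run the containerized distributions (Docker or Singularity images) described in the documentation, which bundle all external dependencies.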
This tool allows you to easily do the following:
- Take fMRI data from raw to fully preprocessed form.
- Implement tools from different software packages.
- Achieve optimal data processing quality by using the best tools available.
- Generate preprocessing quality reports, with which the user can easily identify outliers.
- Receive verbose output concerning the stage of preprocessing for each subject, including meaningful errors.
- Automate and parallelize processing steps, which provides a significant speed-up from typical linear, manual processing.
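fMRIPrep parallelizes its internal workflow through the Nipype engine; in addition, because subjects are processed independently, several single-subject runs can be launched concurrently. The sketch below is illustrative only: subject labels, resource options such as ``--nthreads``, and worker counts are placeholders to adapt to your system.

.. code-block:: python

   # Sketch: launch independent single-subject fMRIPrep runs concurrently.
   from concurrent.futures import ThreadPoolExecutor
   import subprocess

   def run_subject(label):
       cmd = [
           "fmriprep", "/data/bids", "/data/derivatives", "participant",
           "--participant-label", label,
           "--nthreads", "4",        # CPUs available to each run (placeholder)
       ]
       return subprocess.run(cmd, check=True)

   # Each worker only waits on an external process, so threads are sufficient.
   with ThreadPoolExecutor(max_workers=2) as pool:
       list(pool.map(run_subject, ["01", "02", "03", "04"]))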
More information and documentation can be found at https://fmriprep.readthedocs.io/
fMRIPrep is built around three principles:
- Robustness - The pipeline adapts the preprocessing steps depending on the input dataset and should provide results as good as possible independently of scanner make, scanning parameters or presence of additional correction scans (such as fieldmaps).
- Ease of use - Thanks to dependence on the BIDS standard, manual parameter input is reduced to a minimum, allowing the pipeline to run in an automatic fashion (a minimal example layout is sketched after this list).
- "Glass box" philosophy - Automation should not mean that one should not visually inspect the results or understand the methods. Thus, fMRIPrep provides visual reports for each subject, detailing the accuracy of the most important processing steps. This, combined with the documentation, can help researchers to understand the process and decide which subjects should be kept for the group level analysis.
That said, fMRIPrep has known limitations, and there are reasons you might prefer not to use it:

1. Very narrow :abbr:`FoV (field-of-view)` images oftentimes do not contain enough information for standard image registration methods to work correctly, and problems may also arise when extracting the brain from these data. Support for these particular images is on the development road-map.
2. fMRIPrep may also underperform for particular populations (e.g., infants) and non-human brains, although appropriate templates can be provided to overcome the issue (a sketch of providing an alternative template follows below).
3. The "EPInorm" approach is currently not supported, although we plan to implement this feature (see #620).
4. If you really want unlimited flexibility (which is obviously a double-edged sword).
5. If you want students to suffer through implementing each step for didactic purposes, or to learn shell-scripting or Python along the way.
6. If you are trying to reproduce some in-house lab pipeline.
(Reasons 4-6 were kindly provided by S. Nastase in his open review of our pre-print).
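Regarding the alternative templates mentioned in the limitations above, standard spaces are requested through TemplateFlow identifiers. The following is a hedged sketch only; the ``--output-spaces`` syntax, the ``MNIPediatricAsym`` template name, and the cohort specifier are assumptions to verify against the current command-line reference and TemplateFlow.

.. code-block:: python

   # Sketch: request a non-default (pediatric) standard space.
   # Template name and cohort are illustrative; check TemplateFlow for
   # the identifiers appropriate to your population.
   import subprocess

   cmd = [
       "fmriprep", "/data/bids", "/data/derivatives", "participant",
       "--participant-label", "01",
       "--output-spaces", "MNIPediatricAsym:cohort-2",
   ]
   subprocess.run(cmd, check=True)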
Please acknowledge this work using the citation boilerplate that fMRIPrep includes in the visual report generated for every subject processed. For an illustration of how the citation boilerplate generally reads, please check our documentation.