rt-me-fMRI


Overview

This repository contains descriptions, code and data related to the real-time multi-echo functional magnetic resonance imaging (rt-me-fMRI) project conducted at the Electrical Engineering department of the Eindhoven University of Technology. Project outputs include:

1. A dataset and related publication:

   rt-me-fMRI: A task and resting state dataset for real-time, multi-echo fMRI methods development and validation

2. A methods publication:

   The effects of multi-echo fMRI combination and rapid T2*-mapping on offline and real-time BOLD sensitivity

Below we provide more information and instructions regarding:

- Data summary
- Downloading the data
- Exploring the data
- Reproducibility: data preparation
- Reproducibility: results
- Reproducibility: figures
- Software tools
- Citing this work
- Contributions / feedback

Data summary

The rt-me-fMRI dataset is a multi-echo fMRI dataset (N=28 healthy participants) with four task-based and two resting state runs that were collected, curated and made available to the research community. Its main purpose is to advance the development of methods for real-time multi-echo functional magnetic resonance imaging analysis with applications in real-time quality control, adaptive paradigms, and neurofeedback, although the variety of experimental task paradigms supports a multitude of use cases. Tasks include finger tapping, emotional face and shape matching, imagined finger tapping and imagined emotion processing. This figure summarises the collected data:

[Figure 1: summary of the collected task and resting state data]

The full data description is available as an F1000 data article.

Several depictions of the data tree can be viewed here.
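
Once the data have been downloaded (see the next section), the BIDS tree can also be inspected programmatically. The snippet below is a minimal sketch using pybids; the subject and task labels are illustrative and may differ from the exact entity values in the dataset.

```python
# Minimal sketch: querying the BIDS tree with pybids (pip install pybids).
# Subject/task labels are illustrative; check the dataset for the exact values.
from bids import BIDSLayout

layout = BIDSLayout("rt-me-fMRI")          # path to the extracted dataset

print(layout.get_subjects())               # participant labels found in the data
print(layout.get_tasks())                  # task labels found in the data

# All echoes of one functional run for one participant
bold_files = layout.get(
    subject="001",
    task="fingerTapping",
    suffix="bold",
    extension=".nii.gz",
    return_type="filename",
)
print(bold_files)                          # one filename per echo
```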

Downloading the data

The rt-me-fMRI dataset is available for reuse for the purpose of scientific research or education in the field of functional magnetic resonance imaging. To use the data, you must agree to the terms of a Data Use Agreement when downloading it.

The dataset itself can be downloaded from DataverseNL via this link.
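
For scripted downloads, DataverseNL also exposes the standard Dataverse Data Access API. The snippet below is only a sketch under assumptions: the dataset DOI and API token are placeholders, and the Data Use Agreement still needs to be accepted via the DataverseNL web interface before access is granted.

```python
# Minimal sketch: downloading all files of a Dataverse dataset as a zip via
# the Data Access API. DATASET_DOI and API_TOKEN are placeholders.
import requests

BASE_URL = "https://dataverse.nl"
DATASET_DOI = "doi:10.xxxxx/placeholder"   # replace with the dataset's persistent ID
API_TOKEN = "your-dataverse-api-token"     # personal token from your DataverseNL account

with requests.get(
    f"{BASE_URL}/api/access/dataset/:persistentId/",
    params={"persistentId": DATASET_DOI},
    headers={"X-Dataverse-key": API_TOKEN},
    stream=True,
) as response:
    response.raise_for_status()
    with open("rt-me-fMRI.zip", "wb") as f:
        for chunk in response.iter_content(chunk_size=1 << 20):
            f.write(chunk)
```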

The dataset was collected, processed and shared in accordance with the European Union's General Data Protection Regulation (GDPR) as approved by Data Protection Officers at the research institution. These specific conditions aim for personal data privacy to be prioritised while adhering to FAIR data standards ("findable, accessible, interoperable, reusable"). Procedures included de-identifying brain images (e.g. removing personally identifiable information from image filenames and metadata and removing facial features from T1-weighted images), converting the data to BIDS format, employing a Data Use Agreement, and keeping participants fully informed about each of these steps and the associated risks and benefits.

Much of the work that went into this administrative process has been documented as part of the output of the Open Brain Consent Working Group, accessible here.

Exploring the data

To explore the dataset's derivative measures interactively, visit this web application. It was built with Python using the Plotly Dash framework. The open source code base is available at this repository.
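
For context, a Dash application of this kind boils down to a layout plus callbacks. The sketch below is not the actual application code (see the linked repository for that); the derivative file and its columns ('task', 'echo', 'tsnr') are hypothetical stand-ins for the real derivative measures.

```python
# Minimal sketch of a Plotly Dash app in the spirit of the data exploration
# application. File and column names are hypothetical.
import pandas as pd
import plotly.express as px
from dash import Dash, dcc, html, Input, Output

df = pd.read_csv("derivatives/summary_measures.tsv", sep="\t")

app = Dash(__name__)
app.layout = html.Div([
    html.H3("rt-me-fMRI derivative measures"),
    dcc.Dropdown(id="task-dropdown", options=sorted(df["task"].unique())),
    dcc.Graph(id="measure-plot"),
])

@app.callback(Output("measure-plot", "figure"), Input("task-dropdown", "value"))
def update_plot(task):
    # Show all tasks until one is selected in the dropdown
    data = df if task is None else df[df["task"] == task]
    return px.box(data, x="echo", y="tsnr", points="all")

if __name__ == "__main__":
    app.run(debug=True)
```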

Reproducibility: data preparation

The data preparation process is documented here. This includes code to convert neuroimaging, physiological and other data to BIDS format.
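
To make the target structure concrete, the sketch below writes out the kind of multi-echo BIDS file names and JSON sidecars that such a conversion produces. The subject label, task name, echo times and repetition time are illustrative values, not taken from the dataset.

```python
# Minimal sketch of the multi-echo BIDS naming scheme targeted by the
# preparation code. All values (labels, echo times, TR) are illustrative.
import json
from pathlib import Path

bids_root = Path("rt-me-fMRI")
sub, task = "sub-001", "fingerTapping"
echo_times = [0.014, 0.028, 0.042]      # seconds, illustrative

func_dir = bids_root / sub / "func"
func_dir.mkdir(parents=True, exist_ok=True)

for i, te in enumerate(echo_times, start=1):
    stem = f"{sub}_task-{task}_echo-{i}_bold"
    sidecar = {"TaskName": task, "EchoTime": te, "RepetitionTime": 2.0}
    (func_dir / f"{stem}.json").write_text(json.dumps(sidecar, indent=2))
    # the matching imaging file (written by the conversion tool) would be:
    print(func_dir / f"{stem}.nii.gz")
```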

Reproducibility: results

After preprocessing and quality checking (described in full in the data article), the data were processed and analysed as described in the methods article. Because of data storage limitations, these derivative data are not shared together with the rt-me-fMRI dataset. However, code and instructions are provided to reproduce them, and to subsequently generate the summary data from which both the results of the methods paper and the data underlying the Dash application are derived.
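
As a conceptual illustration of what those derivatives involve, the sketch below shows a per-voxel log-linear T2* estimate across echoes followed by a simple T2*-weighted echo combination. It is not the fMRwhy implementation; echo times and data are dummy values.

```python
# Conceptual sketch (not the fMRwhy implementation): per-voxel T2* estimation
# via a log-linear fit across echoes, followed by a T2*-weighted combination.
import numpy as np

echo_times = np.array([0.014, 0.028, 0.042])       # seconds, illustrative
echoes = 100 * np.random.rand(3, 1000) + 1.0       # (n_echoes, n_voxels) dummy signals

# Log-linear decay model: ln S(TE) = ln S0 - TE * R2*
design = np.column_stack([np.ones_like(echo_times), -echo_times])
coef, *_ = np.linalg.lstsq(design, np.log(echoes), rcond=None)
r2star = np.clip(coef[1], 1e-6, None)              # guard against non-physical fits
t2star = 1.0 / r2star

# T2*-weighted combination: w_e proportional to TE_e * exp(-TE_e / T2*)
weights = echo_times[:, None] * np.exp(-echo_times[:, None] / t2star)
combined = (weights * echoes).sum(axis=0) / weights.sum(axis=0)

print(combined.shape, float(t2star.mean()))
```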

Reproducibility: figures

Notebooks with code and descriptions are provided that allow the figures for the data and methods articles to be reproduced.

Software tools

All (pre)processing and major data analysis steps for both the data article and the methods article were done using the open source MATLAB-based fMRwhy toolbox (v0.0.1; https://github.com/jsheunis/fMRwhy), which was developed over the course of this project. fMRwhy has conditional dependencies.

Citing this work

Papers, book chapters, books, posters, oral presentations, and all other presentations of results derived from the rt-me-fMRI dataset should acknowledge the origin of the data as follows:

Data were provided (in part) by the Electrical Engineering Department, Eindhoven University of Technology, The Netherlands and Kempenhaeghe Epilepsy Center, Heeze, The Netherlands

In addition, please use the following citation when referring to the dataset:

Heunis S, Breeuwer M, Caballero-Gaudes C et al. rt-me-fMRI: a task and resting state dataset for real-time, multi-echo fMRI methods development and validation [version 1; peer review: 1 approved, 1 approved with reservations]. F1000Research 2021, 10:70 (https://doi.org/10.12688/f1000research.29988.1)

And the following citation when referring to the methods article:

Heunis, S., Breeuwer, M., Caballero-Gaudes, C., Hellrung, L., Huijbers, W., Jansen, J.F., Lamerichs, R., Zinger, S., Aldenkamp, A.P., 2020. The effects of multi-echo fMRI combination and rapid T2*-mapping on offline and real-time BOLD sensitivity. bioRxiv 2020.12.08.416768. https://doi.org/10.1101/2020.12.08.416768

Contributions / feedback

Feedback and future contributions are very welcome. If you have any comments, questions or suggestions about the dataset or derivative measures, please create an issue in this repository.