
History

  • 2017/02/19 - Initial release. Contents are complete.

Introduction

This GitHub repository provides data and modeling code corresponding to the following paper:

Kay, K.N. & Yeatman, J.D. Bottom-up and top-down computations in word- and face-selective cortex. eLife (2017). http://dx.doi.org/10.7554/eLife.22341

If you use these materials in your research, please cite the above paper. You might also be interested in other available datasets and modeling code -- see this, this, and this. If you have any questions, contact Kendrick.

Basic description

There are two fMRI experiments. The first is the main experiment, which involved 9 subjects, a range of stimuli (22 stimuli + 1 blank = 23 conditions), and three different tasks (fixation task, categorization task, and one-back task). The second is an additional experiment, which involved 1 subject, a wider range of stimuli (114 stimuli + 1 blank = 115 conditions), and a single task (fixation task). In Experiment 1, each stimulus corresponds to 10 distinct images (e.g. 10 different faces corresponding to the FACE stimulus), whereas in Experiment 2, each stimulus corresponds to a single image.

The data have been pre-processed, including slice time correction, motion correction, and spatial undistortion based on fieldmap measurements. Additionally, the data have been analyzed and denoised using the GLM implemented in GLMdenoise. Finally, beta weights produced by the GLM have been averaged across each region of interest (ROI) (defined using independent localizer data). The resulting ROI-averaged beta weights (in units of percent signal change) are provided in this repository.

Code dependencies

To use the modeling code, the knkutils repository must be available on the MATLAB path.
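
For example, a minimal setup might look like this (the clone location below is a placeholder; adjust it to wherever you placed knkutils):

```matlab
% Add knkutils (https://github.com/kendrickkay/knkutils) and its
% subdirectories to the MATLAB path.
addpath(genpath('/path/to/knkutils'));
```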

Files

  • README.md - A short text document referring the reader to this wiki.

  • LICENSE - The license that governs this content.

  • experimentN.mat - The stimuli and data from Experiment N.

  • experimentN.gif - A sample movie showing what Experiment N actually looked like.

  • experimentN.m4v - A sample movie showing what Experiment N actually looked like.

  • experimentNstimuli - A folder of image (.png) files showing the stimuli from Experiment N.

  • experimentNthumbnails.png - A single image showing thumbnails of all stimuli. Note that for Experiment 1, this shows just the first of the ten images corresponding to each stimulus.

  • model_XXX.m - A standalone script that implements the XXX model. XXX is either 'template' (Template model), 'ipsscaling' (IPS-scaling model), 'driftdiffusionpart1' (Drift diffusion model prediction of reaction times), or 'driftdiffusionpart2' (Drift diffusion model prediction of IPS responses).

  • model_XXX.mat - The results after running the model_XXX.m script (excluding the 'a1' variable).

  • html - This directory contains HTML output illustrating the results of the model scripts, generated using the MATLAB 'publish' command. These files can be viewed directly in a web browser.
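
As a hypothetical illustration, the HTML report for one of the model scripts can be regenerated with MATLAB's built-in publish command:

```matlab
% Regenerate the HTML report for the Template model script (assumes the
% knkutils dependency is on the path; see Code dependencies above). By
% default, publish writes its output to an 'html' subfolder next to the script.
publish('model_template.m');
```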

Contents of experiment1.mat

  • 'roilabels' indicates ROI names (1 x 8).

  • 'stimlabels' indicates stimulus names (1 x 23).

  • 'stimuli' is a uint8 matrix with the stimuli (500 pixels x 500 pixels x 10 images x 23 stimuli). The values should be interpreted as linearly related to luminance values (no gamma adjustment required). 240 pixels corresponds to 2 degrees of visual angle.

  • 'subjectbeta' contains single-subject beta weights (9 subjects x 8 ROIs x 23 stimuli x 3 tasks).

  • 'subjectbetase' is the measurement error on 'subjectbeta' (1/2 of the 68% range of bootstraps).

  • 'subjectbetaboot' contains bootstrapped single-subject beta weights (9 subjects x 8 ROIs x 23 stimuli x 3 tasks x 100 bootstraps). Each bootstrap is the result of resampling runs with replacement.

  • 'groupbeta' contains group-averaged beta weights (8 ROIs x 23 stimuli x 3 tasks). A normalization procedure was used to discount differences in overall gain across subjects.

  • 'groupbetase' is the measurement error on 'groupbeta' (1/2 of the 68% range of bootstraps).

  • 'groupbetaboot' contains bootstrapped group-averaged beta weights (8 ROIs x 23 stimuli x 3 tasks x 100 bootstraps). Each bootstrap is the result of resampling subjects with replacement.

  • 'subjectrt' contains single-subject reaction times (RTs) for the categorization task (9 subjects x 23 stimuli). These reflect the median across bootstraps of the median RT across trials.

  • 'subjectrtse' is the measurement error on 'subjectrt' (1/2 of the 68% range of bootstraps).

  • 'grouprt' contains group-averaged RTs for the categorization task (1 x 23 stimuli). A normalization procedure was used to discount additive differences in RT across subjects.

  • 'grouprtse' is the measurement error on 'grouprt' (standard error).

  • 'groupcategoryjudgment' indicates the categorization decision ("word", "face", or "other") for each stimulus (1 x 23 stimuli).

  • 'groupcategorypercentage' is the percentage of trials (averaged across subjects) on which a given stimulus was deemed to be the category listed in 'groupcategoryjudgment'.
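
As a quick orientation to these variables, here is a minimal sketch of loading and plotting the group data (hypothetical usage; it assumes experiment1.mat is in the current directory, that 'roilabels' and 'stimlabels' are cell arrays, and that the task dimension follows the order listed in the Basic description above):

```matlab
% Load the Experiment 1 data and plot group-averaged betas for one ROI,
% with error bars taken from 'groupbetase'.
load('experiment1.mat');

roi = 1;                                  % index into roilabels (1-8)
figure; hold on;
for task = 1:3                            % assumed order: fixation, categorization, one-back
  mn = squeeze(groupbeta(roi,:,task));    % 1 x 23 stimuli
  se = squeeze(groupbetase(roi,:,task));  % 1/2 of the 68% range of bootstraps
  errorbar(1:23, mn, se);
end
set(gca, 'XTick', 1:23, 'XTickLabel', stimlabels);
ylabel('BOLD response (% signal change)');
title(sprintf('Group betas, ROI: %s', roilabels{roi}));
legend({'fixation', 'categorization', 'one-back'});
```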

Contents of experiment2.mat

The description provided above for experiment1.mat also applies to experiment2.mat, with the following exceptions: there are two ROIs (VWFA, FFA), there is a different number of stimuli (115), there is only 1 image per stimulus, the stimuli have a resolution of 750 pixels x 750 pixels (240 pixels corresponds to 2 degrees of visual angle), there is only 1 subject, there is only 1 task, and there are neither group-level nor behavioral data.
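
A minimal sketch for viewing a stimulus (hypothetical usage; it assumes experiment2.mat is in the current directory and that the singleton image dimension is retained in 'stimuli'):

```matlab
% Load Experiment 2 and display one stimulus. The first stimulus is blank,
% so we show the second. The square root converts the linear-luminance
% values for approximately correct display on a typical monitor with a
% gamma of ~2.0 (see the Notes below).
load('experiment2.mat');
im = stimuli(:,:,1,2);   % if the image dimension is dropped, use stimuli(:,:,2)
figure;
imshow(uint8(255 * sqrt(double(im)/255)));  % imshow is in the Image Processing Toolbox
title(stimlabels{2});
```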

Notes

  • There is a slight discrepancy in how 'groupbetase' and 'groupbetaboot' were computed. The 'groupbetase' variable is calculated after normalizing each subject's beta weights to unit length, which discounts differences in overall gain across subjects. In contrast, the 'groupbetaboot' variable reflects independent bootstrap resampling of subjects; each bootstrap is therefore subject to variability in overall gain, and the variability across bootstraps is somewhat higher than that indicated by 'groupbetase' (see the sketch after these notes).

  • In each experiment, the first stimulus is a blank stimulus. Thus, some of the values (e.g. category judgment) associated with the first stimulus are NaN.

  • The pixel values in the movies and .png files depicting the stimuli have been square-rooted so that these files display approximately correctly on a typical monitor (which has a gamma of ~2.0). For exact values, use the stimuli as stored in the .mat files.

  • Subjects 1-6 had partial brain coverage; only a portion of the IPS was recorded in these subjects. Subjects 7-9 had full brain coverage.

  • Regarding 'stimlabels', note the following: 'FACE', 'WORD', and 'NOISE' denote the face, word, and noise stimuli at 100% contrast. The face at 0% phase coherence is the same stimulus as 'NOISE', the face at 100% phase coherence is the same as 'FACE', and the word at 100% phase coherence is the same as 'WORD'.
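
To make the "1/2 of the 68% range of bootstraps" definition concrete, here is a sketch of recomputing an error estimate from 'groupbetaboot' (assuming experiment1.mat is loaded; prctile is in the Statistics and Machine Learning Toolbox):

```matlab
% Half the central 68% interval of the bootstraps (16th to 84th percentile),
% computed along the bootstrap dimension (dimension 4).
lo = prctile(groupbetaboot, 16, 4);  % 8 ROIs x 23 stimuli x 3 tasks
hi = prctile(groupbetaboot, 84, 4);
se = (hi - lo) / 2;  % cf. 'groupbetase'; see the first note above for why
                     % these can differ slightly
```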

Terms of use

The content provided here is licensed under a Creative Commons Attribution 3.0 Unported License (http://creativecommons.org/licenses/by/3.0/). You are free to share and adapt the content as you please, under the condition that you cite the manuscript described above.