ERA Toolbox Wiki
The ERP Reliability Analysis (ERA) Toolbox is an open-source Matlab program that uses generalizability (G) theory to evaluate the reliability of ERP data. The purpose of the toolbox is to characterize the dependability (the G-theory analog of reliability) of ERP scores, to make calculating these estimates on a study-by-study basis easier, and thereby to increase the reporting of such estimates.
The ERA Toolbox provides information about the minimum number of trials needed for dependable ERP scores and describes the overall dependability of ERP estimates. All information provided by the ERA Toolbox is stratified by group and condition to allow the user to directly compare dependability (e.g., a particular group may require more trials to achieve an acceptable level of dependability than another group).
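The core quantity behind these estimates can be illustrated with a small sketch. In a single-facet G-theory design, dependability is the ratio of person (true-score) variance to person variance plus error variance averaged over trials. The sketch below is a minimal illustration of that ratio and of how a minimum trial count follows from it; the function names and the 0.7 threshold are hypothetical choices for illustration, not the toolbox's API (the toolbox itself estimates the variance components with the Bayesian approach described in Baldwin, Larson, and Clayson, 2015).

```python
def dependability(var_person, var_error, n_trials):
    """Single-facet G-theory dependability:
    var_person / (var_person + var_error / n_trials).
    Averaging over more trials shrinks the error term, so
    dependability increases with trial count."""
    return var_person / (var_person + var_error / n_trials)

def min_trials(var_person, var_error, threshold=0.7):
    """Smallest trial count whose estimated dependability
    meets the chosen threshold (0.7 is an illustrative value)."""
    n = 1
    while dependability(var_person, var_error, n) < threshold:
        n += 1
    return n
```

Because the variance components can differ between groups or conditions, the same arithmetic can yield different minimum trial counts for each, which is why the toolbox stratifies its output.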
Instructions for downloading the toolbox can be found here.
Why another toolbox?
Reliability is a property of scores (the data in hand), not a property of measures. This means that the P3, error-related negativity (ERN), late positive potential (LPP), or your favorite ERP component is not reliable in some "universal" sense (Clayson and Miller, 2017b). Because reliability is context dependent, demonstrating the reliability of LPP scores in undergraduates at UCLA does not mean that LPP scores recorded from children in New York can be assumed to be reliable. Measurement reliability needs to be demonstrated on a population-by-population, study-by-study, component-by-component basis (Clayson and Miller, 2017b; Hajcak, Meyer, and Kotov, 2017; Infantolino, Luking, Sauder, Curtin, and Hajcak, 2018).
The purpose of the ERA Toolbox is to facilitate the calculation of dependability estimates that characterize observed ERP scores. ERP psychometric studies have been useful for suggesting trial-count cutoffs and for characterizing the overall reliability of ERP components in those studies. When designing a study, that information can help guide decisions about, for example, the number of trials to present to each participant. However, observed data that exceed a recommended cutoff are not necessarily reliable (Clayson and Miller, 2017b): ERP score reliability cannot be inferred from trial counts alone.
My hope is that the ERA Toolbox will make it easier to demonstrate the reliability of ERP scores on a study-by-study basis.
Mismeasurement of ERPs leads to misunderstood phenomena and mistaken conclusions. Poor ERP score reliability stemming from mismeasurement compromises validity. Improving ERP measurement by ensuring score reliability can improve our trust in the inferences drawn from observed scores and the likelihood that our findings will replicate.
The formulas implemented in the ERA Toolbox were developed by Scott A Baldwin. Information about the formulas can be found in Baldwin, Larson, and Clayson (2015) and Clayson and Miller (2017a). The ERA Toolbox can be cited using the Clayson and Miller paper.
Baldwin, S. A., Larson, M. J., & Clayson, P. E. (2015). The dependability of electrophysiological measurements of performance monitoring in a clinical sample: A generalizability and decision analysis of the ERN and Pe. Psychophysiology, 52, 790-800. doi: 10.1111/psyp.12401
Clayson, P. E., & Miller, G. A. (2017a). ERP Reliability Analysis (ERA) Toolbox: An open-source toolbox for analyzing the reliability of event-related brain potentials. International Journal of Psychophysiology, 111, 68-79. doi: 10.1016/j.ijpsycho.2016.10.012
Copyright (C) 2016-2018 Peter E. Clayson
This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.