Combining features from ERP components in single-trial EEG for discriminating four-category visual objects

  • Authors: Changming Wang et al.
  • Journal: Journal of Neural Engineering, Volume 9
  • Publication Date: 2012

This paper proposes the use of single-trial EEG to classify four image categories: Cat, Building, Car, and Face. The experiment involved 8 subjects; each subject was shown 40 images from each category, and 10 sessions were conducted per subject. Some experiment parameters:

  • Images were grayscaled and luminance was adjusted manually.
  • Each image was presented for 500 ms, followed by a blank screen with an inter-stimulus interval (ISI) randomly ranging from 850 to 1450 ms.
  • A red fixation cross was shown at the center of the screen. The cross periodically changed color from red to blue, and subjects were required to report the number of blue crosses to confirm they remained engaged.
  • Signals were band-pass filtered from 0.5 to 40 Hz.
  • Used a 64-channel Brain Products system, but only 50 channels were used (channels in the frontal area, which are susceptible to eye blinks, were discarded).

Other findings in the paper:

  • They used four event-related potential (ERP) components for feature extraction: P1, N1/N170, P2a, and P2b, with interval lengths determined by visual inspection to be 40, 60, 70, and 80 ms respectively.
  • The feature was simply the time average of the signal within each interval for each trial, resulting in a 50-dimensional vector (one value per channel); a sketch of this feature extraction is given after this list.
  • Used Fisher's Linear Discriminant to classify the features into their corresponding category. Classification results were based on leave-one-subset-out validation: two consecutive sessions from each subject were lumped into one subset, giving 5-fold cross-validation (see the classification sketch below).
  • The results are not impressive at all: the average accuracy across subjects was 53% for the N1 ERP interval.
  • They also tried combining multiple intervals by concatenating the feature vectors, which did improve results, with the best classification accuracy near 75% for P1+N1+P2a+P2b.
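A minimal sketch of the interval-averaging feature extraction described above, assuming epochs are already filtered (0.5 to 40 Hz) and stored as a NumPy array of shape `(n_trials, 50 channels, n_samples)`. The sampling rate and the ERP window onsets below are hypothetical placeholders (the paper only fixes the window lengths, determined by visual inspection); only the idea of averaging each channel over each window and concatenating the resulting 50-dimensional vectors comes from the summary.

```python
import numpy as np

FS = 1000  # sampling rate in Hz (assumed; not stated in this summary)

# Hypothetical ERP windows as (onset_ms, length_ms). The lengths 40/60/70/80 ms
# match the summary; the onsets are illustrative placeholders only.
ERP_WINDOWS = {
    "P1":  (80, 40),
    "N1":  (140, 60),
    "P2a": (200, 70),
    "P2b": (270, 80),
}

def interval_features(epochs, windows=ERP_WINDOWS, fs=FS):
    """Average each channel over each ERP window.

    epochs : array of shape (n_trials, n_channels, n_samples),
             time-locked to stimulus onset.
    Returns an array of shape (n_trials, n_channels * len(windows)):
    one 50-dimensional vector per window, concatenated across windows
    (e.g. 200 dimensions for P1+N1+P2a+P2b).
    """
    feats = []
    for onset_ms, length_ms in windows.values():
        a = int(onset_ms * fs / 1000)
        b = int((onset_ms + length_ms) * fs / 1000)
        feats.append(epochs[:, :, a:b].mean(axis=2))  # (n_trials, n_channels)
    return np.concatenate(feats, axis=1)
```

Using a single window from the dictionary reproduces the 50-dimensional per-component feature; passing all four reproduces the concatenated variant.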
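A sketch of the classification setup, assuming scikit-learn. `LinearDiscriminantAnalysis` stands in for the paper's Fisher Linear Discriminant, and the `session_ids` grouping (pairing consecutive sessions into five subsets) is an assumption about how the leave-one-subset-out folds were formed; `X` and `y` are illustrative names for the features and labels.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def classify_with_fld(X, y, session_ids):
    """Leave-one-subset-out evaluation with a Fisher-style linear discriminant.

    X           : (n_trials, n_features) feature matrix, e.g. the output of
                  interval_features above.
    y           : (n_trials,) category labels (0..3 for the four classes).
    session_ids : (n_trials,) session index per trial (0..9); consecutive
                  sessions are paired into 5 subsets, mirroring the 5-fold
                  scheme described in the summary.
    """
    subsets = np.asarray(session_ids) // 2      # sessions (0,1)->0, (2,3)->1, ...
    clf = LinearDiscriminantAnalysis()          # Fisher LDA stand-in
    scores = cross_val_score(clf, X, y, groups=subsets, cv=LeaveOneGroupOut())
    return scores.mean(), scores
```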