About ERP Pattern Analysis

From Nemo



Averaged EEG, or “event-related potential” (ERP), data consist of overlapping spatial and temporal patterns (“components” in the traditional nomenclature). Researchers seek to isolate these patterns and relate them to specific neurocognitive functions. Unfortunately, conventional methods for component analysis are ad hoc and vary widely across labs, making it difficult to generalize results across studies. The goal of our ERP pattern analysis project is to develop and test methods for systematic pattern analysis. If these methods are effective, they will facilitate identification and integration of ERP patterns across different laboratories and experimental tasks (cognitive-behavioral “paradigms”).


Our ERP pattern analysis tasks fall into three broad categories: (a) design and implementation of methods for pattern separation; (b) methods for evaluation of pattern separation; and (c) labeling of patterns based on spatial, temporal, and/or spectral metrics extracted from patterns in individual-subject ERPs. Accurate pattern analysis and classification is a goal in its own right. It can also be viewed as a first step in the development of our ERP ontologies, as described in our NEMO project proposal.

Methods for pattern separation
Methods for ERP pattern separation are of two general types: signal decomposition (e.g., PCA, ICA) and windowing or segmentation (e.g., “microstate analysis”).

Signal decomposition methods attempt to separate patterns through spatial and/or temporal decorrelation (PCA) or higher-order statistical analysis of signal distributions (ICA). The major advantage of the factor-analytic approach is that, if successful, it can address spatial and temporal superposition of ERP patterns. The main challenge in implementing and evaluating these methods is determining whether source pattern variance was correctly allocated to the patterns (“factors”) extracted from the original ERPs. To avoid misallocation of variance, it is important to retain the right number of factors and to use an effective rotation method to achieve correct variable-to-factor assignment.

An alternative to signal decomposition is windowing, or segmentation, of ERP data. In traditional ERP analysis, researchers often select temporal windows that define patterns of interest based on a priori hypotheses, post-hoc inspection of the data, or arbitrary segmentation of the entire epoch into equal intervals (e.g., 100 ms windows). More data-driven windowing methods also exist: the data can be parsed into windows corresponding to distinct brain states (“microstates”) using shifts in global field power (GFP) or shifts in topography (“global field dissimilarity,” or GFD). These two approaches to ERP segmentation have been shown to give similar results (Koenig, 1996). We assume that decomposition and segmentation methods will give partly convergent and partly discrepant results, leading to different “views” of ERP patterns and thus to different ontologies.
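To make the segmentation quantities concrete, the two microstate criteria mentioned above can be sketched in a few lines of NumPy. This is a minimal illustration, not the implementation used in the project: GFP is computed as the spatial standard deviation of the scalp map at each timepoint, and GFD as the GFP of the difference between successive maps after each map is normalized to unit GFP.

```python
import numpy as np

def gfp(erp):
    """Global field power: spatial standard deviation across channels
    at each timepoint. erp: (n_channels, n_times) average-referenced ERP."""
    return erp.std(axis=0)

def gfd(erp):
    """Global field dissimilarity between successive timepoints:
    GFP of the difference of the GFP-normalized maps. Low GFD means
    a stable topography (a candidate microstate); peaks in GFD mark
    candidate segment boundaries."""
    g = gfp(erp)
    norm = erp / np.where(g > 0, g, 1.0)  # scale each map to unit GFP
    return gfp(norm[:, 1:] - norm[:, :-1])
```

Because GFD is computed on normalized maps, a topography that only changes in overall amplitude yields GFD near zero, which is what distinguishes topographic shifts from simple power changes.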
Evaluation of pattern analysis methods
We envision extending APECS to include metrics for evaluation of ERP pattern analysis methods. The appropriate metrics may be different for spatial decomposition (e.g., sICA), temporal decomposition (e.g., tPCA), and microstate segmentation.
Classification and labeling of ERP patterns
ERP patterns are conventionally defined along three dimensions: space (scalp topography), time (peak latency and sometimes duration), and function (sensitivity to experimental stimuli and task parameters). However, methods for defining and measuring ERP patterns have been ad hoc and have varied across labs. We aim to develop rigorous methods for pattern identification and labeling that are data-driven but also generalize across task contexts, allowing ERP results to be compared and integrated across experiments.

We have adopted two complementary approaches to this problem. In the first, "top-down," approach, we use expert knowledge, based on ERP analysis results from published papers, to define and operationalize ERP patterns that are commonly seen in particular cognitive paradigms (e.g., visual word recognition), and then formulate explicit pattern rules to capture this prior knowledge. In the second, "bottom-up," approach, we begin with the data, extract ERP patterns ("components") using one of the pattern analysis methods described above, and summarize the characteristics of each pattern with a set of spatial, temporal, and functional metrics. These metrics are then input to data mining (clustering and classification) methods to "discover" pattern rules.

Integrating the top-down and bottom-up methods required some tuning, including the use of complementary pattern analysis techniques (sICA + tPCA) to provide spatial and temporal data that could be mined for rule refinement. Ideally, this approach would involve many iterations, revising top-down rules based on data-specific outcomes. Our current approach reverses these steps.
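The bottom-up step, summarizing each extracted pattern as a metric vector and clustering those vectors to discover pattern classes, can be sketched as follows. This is a simplified stand-in for the data mining methods mentioned above (a minimal k-means with deterministic initialization); the metric names in the comment are illustrative, not the project's actual feature set.

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Minimal k-means for clustering per-pattern metric vectors
    (e.g., peak latency, centroid coordinates, condition effects).
    X: (n_patterns, n_metrics). Returns (labels, cluster centers)."""
    # deterministic init: k evenly spaced samples from X
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].astype(float)
    for _ in range(iters):
        # assign each pattern to its nearest center (squared Euclidean)
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        # move each center to the mean of its assigned patterns
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```

In practice the metric vectors would be standardized first, since latency (ms) and centroid coordinates live on very different scales.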
We begin by labeling data in a data-driven (bottom-up) fashion, an approach made possible by the recent implementation of positive and negative “centroid” metrics and a systematic (revised) mapping of EGI channels to the International 10-10 system and to corresponding “regions of interest” (ROIs). Now that we can characterize the spatial distribution of patterns more accurately (including the positive and negative fields), we can automatically label patterns based on peak time (rounded to the nearest 50 ms) and the peak positive and negative centroids. Patterns can thus be extracted and labeled without any top-down intervention. Once the data are labeled in this manner, the challenge is to integrate or cluster individual patterns observed in different ERP experiments into higher-level pattern definitions that are more general (top-down) and can therefore be validly applied to new datasets.
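The automatic labeling step can be illustrated with a short sketch. The function below is a hypothetical helper, not the project's actual code: it derives a peak latency rounded to the nearest 50 ms and amplitude-weighted centroids of the positive and negative fields from a single extracted pattern.

```python
import numpy as np

def label_pattern(topo, times, waveform, chan_pos):
    """Illustrative automatic labeling of one extracted ERP pattern.
    topo: (n_channels,) spatial weights; times: (n_times,) latencies in ms;
    waveform: (n_times,) time course; chan_pos: (n_channels, 2) 2-D sensor
    coordinates. Assumes the topography has both positive and negative
    channels (i.e., an average-referenced map)."""
    # peak latency of the time course, rounded to the nearest 50 ms
    peak_ms = 50 * int(round(float(times[np.argmax(np.abs(waveform))]) / 50))
    pos = topo > 0
    # amplitude-weighted centroids of the positive and negative fields
    pos_centroid = (chan_pos[pos] * topo[pos, None]).sum(0) / topo[pos].sum()
    neg_centroid = (chan_pos[~pos] * topo[~pos, None]).sum(0) / topo[~pos].sum()
    return peak_ms, pos_centroid, neg_centroid
```

A labeling scheme along these lines would then map each centroid to its nearest ROI (via the 10-10 mapping described above) to produce a symbolic pattern label.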

