Toscano, J. C., & McMurray, B. (2012, October). Poster presented at the 2012 Neurobiology of Language Conference, San Sebastian, Spain.
Abstract: Many models of speech perception posit that listeners perceive speech sounds categorically (i.e., that the units of speech perception are phoneme categories), and behavioral and electrophysiological evidence has supported this view. However, previous results may reflect responses that include both initial encoding of the stimulus and categorization. Thus, it is unclear whether early processing is based on continuous acoustic features or categorical phonological features. Recently, we presented an ERP approach for separating effects of perceptual encoding from later categorization responses (Toscano, McMurray, Dennhardt, & Luck, 2010, Psychological Science). We measured the auditory N1 and P3 components in response to speech sounds varying in voice-onset time (VOT) and found that the N1 reflects the acoustic properties of the stimulus (VOT differences) rather than discrete categories (/b/ vs. /p/). The later-occurring P3 component, in contrast, reflects both acoustic and category-level differences. Here, we extend these results to ask whether these components serve as an index of encoding and categorization for other cues and phonological contrasts. We found that effects of continuous acoustic differences on N1 amplitude can be observed for some distinctions but are difficult to observe for others. Differences in P3 amplitude reflecting both acoustic and phonological information were observed for a variety of stimulus types. Overall, the results suggest that this approach allows us to separate effects of encoding and categorization for certain perceptually relevant speech distinctions. More importantly, in contrast to many classic models, they suggest that speech perception is based on differences in continuous acoustic cues rather than discrete categories.