Solving the problems of integrating cues and processing speech in time
McMurray, B., & Toscano, J. C. (2012, October). Paper presented at the 164th Meeting of the Acoustical Society of America, Kansas City, MO.
Abstract: Work on language comprehension is classically divided into two fields. Speech perception asks how listeners cope with variability from factors like talker and coarticulation to compute some phoneme-like unit; word recognition assumes these units and asks how listeners cope with time as they match the input to the lexicon. Evidence that within-category detail affects lexical activation (Andruski et al., 1994; McMurray et al., 2002) challenges this view: variability in the input is not “handled” by lower-level processes and instead survives until late in processing. However, the consequences of this have not been fully fleshed out. This talk begins to explore them using evidence from eye-tracking paradigms. First, I show how lexical activation/competition processes can help cope with perceptual problems by integrating acoustic cues that are strung out over time. Next, I examine a fundamental issue in word recognition, temporal order (e.g., distinguishing cat and tack). I present evidence that listeners represent words with little inherent order information, and I raise the possibility that fine-grained acoustic detail may serve as a proxy for order. Together, these findings suggest that real-time lexical processes may help listeners cope with perceptual ambiguity, and that fine-grained perceptual detail may help them cope with the problem of time.
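The central mechanistic claim, that graded lexical activation can integrate acoustic cues arriving at different points in time, can be illustrated with a minimal sketch. The code below is not the authors' model: the cue names, Gaussian likelihoods, and all parameter values are invented for illustration. It shows how accumulating cue evidence directly at the lexical level, with Luce-choice normalization producing graded, competition-like activations, lets a late-arriving cue resolve an earlier ambiguity.

```python
import math

# Hypothetical cue statistics (illustrative values, not from the paper):
# (mean, s.d.) of each cue under each word. VOT arrives early (t=1);
# vowel length, a secondary voicing cue, arrives later (t=2).
CUES = {
    "vot":          {"time": 1, "beach": (5.0, 8.0),    "peach": (45.0, 8.0)},
    "vowel_length": {"time": 2, "beach": (180.0, 30.0), "peach": (140.0, 30.0)},
}

def gauss_ll(x, mu, sd):
    """Log-likelihood of cue value x under a Gaussian(mu, sd), up to a constant."""
    return -0.5 * ((x - mu) / sd) ** 2 - math.log(sd)

def integrate(observed):
    """Update word activations incrementally as each cue arrives in time."""
    log_act = {"beach": 0.0, "peach": 0.0}  # flat prior over the two words
    for t in sorted({cue["time"] for cue in CUES.values()}):
        for name, cue in CUES.items():
            if cue["time"] == t and name in observed:
                for word in log_act:
                    mu, sd = cue[word]
                    log_act[word] += gauss_ll(observed[name], mu, sd)
        # Luce-choice normalization of the exponentiated log-activations:
        # graded activations, equivalent to a posterior under a flat prior.
        z = max(log_act.values())
        total = sum(math.exp(v - z) for v in log_act.values())
        probs = {w: math.exp(v - z) / total for w, v in log_act.items()}
        print(f"t={t}: " + ", ".join(f"{w}={p:.2f}" for w, p in probs.items()))

# A perfectly ambiguous VOT (25 ms) leaves the two words in competition at t=1;
# the later, long vowel (175 ms) then tips activation toward "beach" at t=2.
integrate({"vot": 25.0, "vowel_length": 175.0})
```

Running this prints a 50/50 split after the ambiguous VOT and a graded shift toward "beach" once the vowel-length cue arrives, mirroring the claim that within-category detail survives to the lexical level, where it can be combined with later-arriving evidence.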