Dennhardt, J., McMurray, B., Luck, S. J., and Toscano, J. C. (2006, June). Poster presented at the 151st Meeting of the Acoustical Society of America, Providence, RI.
By building computational models of the speech system and running simulations with them, we can better understand how phonetic categories are learned and how listeners process speech sounds during word recognition. I use several types of models to study speech perception, including neural network and statistical models. Two of the specific problems I am working on are unsupervised learning and cue weighting... Read more →
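One statistical approach to the unsupervised-learning problem is to treat phonetic categories as clusters in an acoustic dimension and let a mixture model discover them from unlabeled tokens. The sketch below is purely illustrative (the VOT distributions, component count, and EM details are assumptions, not a description of any specific published model): it fits a two-component Gaussian mixture to synthetic voice-onset-time data and recovers two category means without category labels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic VOT values (ms): a hypothetical bimodal distribution with
# /b/-like (short-VOT) and /p/-like (long-VOT) tokens, no labels given.
vot = np.concatenate([rng.normal(5, 8, 300), rng.normal(50, 12, 300)])

def fit_gmm_1d(x, k=2, n_iter=100):
    """Fit a 1-D Gaussian mixture by expectation-maximization."""
    # Initialize means from data quantiles, equal weights, shared variance.
    mu = np.quantile(x, np.linspace(0.25, 0.75, k))
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each token.
        dens = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) \
               / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture weights, means, and variances.
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

pi, mu, var = fit_gmm_1d(vot)
print(sorted(mu))  # two means, one near each VOT mode
```

Run on the synthetic data above, the learner settles on one short-VOT and one long-VOT category, which is the basic intuition behind distributional learning of phonetic categories.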
A classic question in speech perception concerns whether listeners are sensitive to the continuous acoustic features in the speech signal independently of phonological information. Recent work has shown that listeners can perceive within-category acoustic differences at the level of lexical representations. However, these responses also show effects of phonological categories. Thus, it is unclear whether there is an earlier stage of processing that is not influenced by category information... Read more →
One way for listeners to cope with variability in the speech signal is to use multiple acoustic cues when identifying speech sounds. Multiple cues often contribute to a single phonetic distinction, and listeners can combine these different sources of acoustic information to help resolve ambiguity. For example, one of the primary acoustic cues to the voicing distinction in English (the difference between the sounds 'b' and 'p') is voice-onset time (VOT)... Read more →
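The idea of weighted cue combination can be sketched with a simple linear model: each cue contributes evidence in proportion to its weight, and the summed evidence is mapped to a category probability. Everything here is a toy assumption (the cue weights, the bias term, and the use of F0 as the secondary cue are illustrative, not fitted values from any study):

```python
import numpy as np

def p_voiceless(vot_ms, f0_hz, w_vot=0.15, w_f0=0.02, bias=-5.6):
    """Probability of categorizing a token as voiceless ('p').

    Each cue is scaled by its weight and summed; a logistic function
    squashes the total evidence into a probability. Weights are
    hypothetical values chosen for illustration.
    """
    evidence = w_vot * vot_ms + w_f0 * f0_hz + bias
    return 1.0 / (1.0 + np.exp(-evidence))

# An ambiguous VOT (25 ms) is disambiguated by a secondary cue:
# higher F0 tends to co-occur with voiceless stops.
print(p_voiceless(vot_ms=25, f0_hz=90))   # low F0: more 'b'-like
print(p_voiceless(vot_ms=25, f0_hz=140))  # high F0: more 'p'-like
```

With the VOT held at an ambiguous value, shifting the secondary cue moves the response across the category boundary, which is the signature of listeners trading off multiple cues rather than relying on VOT alone.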