Tracking Lexical Access in Continuous Speech – Michael Tanenhaus (University of Rochester)
All current models of spoken word recognition assume that as speech unfolds, multiple lexical candidates become partially activated and compete for recognition. However, models differ on fundamental questions such as the nature of the competitor set, the temporal dynamics of word recognition, how fine-grained acoustic information is used to discriminate among potential candidates, and how acoustic input is combined with information from the context of the utterance. I'll illustrate how each of these issues is informed by monitoring eye movements as participants follow instructions to use a computer mouse to click on and move pictures presented on a monitor. The timing and pattern of fixations allow for strong inferences about the activation of potential lexical competitors in continuous speech, while monitoring lexical access at the finest temporal grain to date, without interrupting the speech or requiring a meta-linguistic judgment. I'll focus on recent work examining the effects on lexical access of fine-grained acoustic variation, such as coarticulatory information in vowels, within-category differences in voice-onset time, interactions between acoustic and semantic constraints, and prosodic context.