A computational approach to early language bootstrapping – Emmanuel Dupoux (Ecole Normale Supérieure)

When: December 2, 2011 (all day)

View Seminar Video
Abstract
Human infants spontaneously and effortlessly learn the language(s) spoken in their environment, despite the extraordinary complexity of the task. In the past 30 years, tremendous progress has been made in the empirical investigation of the linguistic achievements of infants during their first two years of life. In that short period, infants learn, in an essentially unsupervised fashion, the basic building blocks of the phonetic, phonological, lexical, and syntactic organization of their native language (see Jusczyk, 1997). Yet little is known about the mechanisms responsible for these acquisitions. Do infants rely on general statistical inference principles? Do they rely on specialized algorithms devoted to language?

Here, I will present an overview of the early phases of language acquisition and focus on one area where a modeling approach is currently being conducted, using tools from signal processing and automatic speech recognition: the unsupervised acquisition of phonetic categories. It is known that during the first year of life, before they are able to talk, infants construct a detailed representation of the phonemes of their native language and lose the ability to distinguish nonnative phonemic contrasts (Werker & Tees, 1984). It will be shown that the only mechanism proposed so far, unsupervised statistical clustering (Maye, Werker, & Gerken, 2002), may not converge on the inventory of phonemes but rather on contextual allophonic units that are smaller than the phoneme (Varadarajan et al., 2008). Alternative algorithms will be presented that use three sources of information: the statistical distribution of the contexts in which these units occur, the phonetic plausibility of the grouping, and the existence of lexical minimal pairs (Peperkamp et al., 2006; Martin et al., submitted). It is shown that each of the three sources of information can be acquired without presupposing the others, but that they need to be combined to arrive at good performance. Modeling results and experiments with human infants will be presented.

The more general proposal is that early language bootstrapping may not rely on learning principles that are necessarily specific to language. What is presumably unique to language, though, is the way in which these principles are combined to optimize the emergence of linguistic categories after only a few months of unsupervised exposure to speech signals.

References

Jusczyk, P. (1997). The discovery of spoken language. Cambridge, MA: MIT Press.
Martin, A., Peperkamp, S., & Dupoux, E. (submitted). Learning phonemes with a pseudo-lexicon.
Maye, J., Werker, J., & Gerken, L. (2002). Infant sensitivity to distributional information can affect phonetic discrimination. Cognition, 82, B101-B111.
Peperkamp, S., Le Calvez, R., Nadal, J.-P., & Dupoux, E. (2006). The acquisition of allophonic rules: Statistical learning with linguistic constraints. Cognition, 101, B31-B41.
Varadarajan, B., Khudanpur, S., & Dupoux, E. (2008). Unsupervised learning of acoustic subword units. In Proceedings of ACL-08: HLT, 165-168.
Werker, J.F., & Tees, R.C. (1984). Cross-language speech perception: Evidence for perceptual reorganization during the first year of life. Infant Behavior and Development, 7, 49-63.
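To make the distributional-clustering idea mentioned in the abstract concrete, the following is a minimal, illustrative Python sketch, not the speaker's actual model: it fits Gaussian mixtures to synthetic voice-onset-time data (all values hypothetical) and compares one- versus two-category solutions by BIC, in the spirit of Maye et al. (2002). It assumes numpy and scikit-learn are available.

# Illustrative sketch only (not the speaker's model): unsupervised distributional
# clustering along a single phonetic dimension. The data are synthetic; the
# category means and variances below are hypothetical assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Simulate voice-onset-time (VOT) values drawn from two hypothetical categories,
# e.g. a /b/-like short-lag mode and a /p/-like long-lag mode.
vot = np.concatenate([
    rng.normal(loc=10.0, scale=5.0, size=500),   # short-lag tokens
    rng.normal(loc=60.0, scale=10.0, size=500),  # long-lag tokens
]).reshape(-1, 1)

# Fit mixtures with 1 vs. 2 components and compare by BIC: a bimodal input
# distribution should favour a two-category solution, a unimodal one a single category.
for k in (1, 2):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(vot)
    print(f"{k} component(s): BIC = {gmm.bic(vot):.1f}")

# The caveat raised in the abstract: clustering of this kind can converge on
# context-dependent (allophonic) modes rather than phonemes, which is why
# additional cues (contexts, phonetic plausibility, minimal pairs) are combined.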

Center for Language and Speech Processing