Modeling “Bootstrapping” in Language Acquisition – Sharon Goldwater (University of Edinburgh)
Abstract
The term “bootstrapping” appears frequently in the literature on child language acquisition, but is often defined vaguely (if at all) and can mean different things to different people. In this talk, I define bootstrapping as the use of structured correspondences between different levels of linguistic structure as a way to aid learning, and discuss how probabilistic models can be used to investigate the nature of these correspondences and how they might help the child learner. I will discuss two specific examples, showing (1) that using correspondences between acoustic and syntactic information can help with syntactic learning (“prosodic bootstrapping”), and (2) that using correspondences between syntactic and semantic information in a joint learning model can help with learning both syntax and semantics, while also simulating important findings from the child language acquisition literature.
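As a rough illustration of the definition above, and not code from the talk, the following Python sketch shows how correspondences between two levels of structure can aid learning: two individually weak cues (a hypothetical acoustic cue and a hypothetical syntactic cue) combine to give a sharper posterior over a latent category than either cue alone. All category names and probabilities are invented for illustration.

```python
# Toy sketch of "bootstrapping" as using correspondences between levels of
# structure to aid learning. A latent binary category (say, noun vs. verb)
# is only weakly signalled by each cue on its own, but the cues combine.
# All numbers below are invented.

def posterior(prior, likelihoods, observations):
    """Compute P(category | observations) by Bayes' rule for a binary category."""
    scores = {}
    for cat, p in prior.items():
        for level, obs in observations.items():
            p *= likelihoods[level][cat][obs]
        scores[cat] = p
    z = sum(scores.values())
    return {cat: s / z for cat, s in scores.items()}

prior = {"noun": 0.5, "verb": 0.5}
likelihoods = {
    # P(acoustic cue | category), e.g. a prosodic boundary pattern
    "acoustic": {"noun": {"cue": 0.7, "no_cue": 0.3},
                 "verb": {"cue": 0.4, "no_cue": 0.6}},
    # P(syntactic cue | category), e.g. a distributional context
    "syntactic": {"noun": {"cue": 0.65, "no_cue": 0.35},
                  "verb": {"cue": 0.35, "no_cue": 0.65}},
}

# One level of structure: the posterior barely moves (noun ~0.64).
print(posterior(prior, likelihoods, {"acoustic": "cue"}))
# Two corresponding levels: the posterior sharpens (noun ~0.76).
print(posterior(prior, likelihoods, {"acoustic": "cue", "syntactic": "cue"}))
```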
Biography
Sharon Goldwater is a Reader (US Associate Professor) in the Institute for Language, Cognition and Computation at the University of Edinburgh’s School of Informatics, and is currently a Visiting Associate Professor in the Department of Cognitive Science at Johns Hopkins University. She worked as a researcher in the Artificial Intelligence Laboratory at SRI International from 1998 to 2000 before starting her Ph.D. at Brown University, supervised by Mark Johnson. She completed her Ph.D. in 2006 and spent two years as a postdoctoral researcher at Stanford University before moving to Edinburgh. Her current research focuses on unsupervised learning for automatic natural language processing and computational modeling of language acquisition in children. She is particularly interested in Bayesian approaches to the induction of linguistic structure, ranging from phonemic categories to morphology and syntax.