A Bayesian view of inductive learning in humans and machines – Josh Tenenbaum (MIT)

February 17, 2004 all-day

In everyday learning and reasoning, people routinely draw successful generalizations from very limited evidence. Even young children can infer the meanings of words or the existence of hidden biological properties or causal relations from just one or a few relevant observations — far outstripping the capabilities of conventional learning machines. How do they do it? I will argue that the success of people's everyday inductive leaps can be understood as the product of domain-general rational Bayesian inferences constrained by people's implicit theories of the structure of specific domains. This talk will explore the interactions between people's domain theories and their everyday inductive leaps in several different task domains, such as generalizing biological properties and learning word meanings. I will illustrate how domain theories generate the hypothesis spaces necessary for Bayesian generalization, and how these theories may themselves be acquired as the products of higher-order statistical inferences. I will also show how our approach to modeling human learning motivates new machine learning techniques for semi-supervised learning: generalizing from very few labeled examples with the aid of a large sample of unlabeled data.
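The abstract's core idea — Bayesian generalization over a theory-generated hypothesis space — can be illustrated with a minimal sketch. The hypotheses, priors, and example numbers below are hypothetical (not from the talk); the likelihood uses the "size principle," under which each example is assumed drawn uniformly from the true concept's extension, so smaller hypotheses consistent with the data are favored.

```python
def posterior(observations, hypotheses, priors):
    """Posterior over hypotheses given example members of a concept.

    Size-principle likelihood: each example is sampled uniformly from
    the hypothesis's extension, so likelihood = (1/|h|)^n when the
    hypothesis contains every example, and 0 otherwise.
    """
    scores = {}
    for name, ext in hypotheses.items():
        if all(x in ext for x in observations):
            scores[name] = priors[name] * (1.0 / len(ext)) ** len(observations)
        else:
            scores[name] = 0.0
    z = sum(scores.values())
    return {name: s / z for name, s in scores.items()}

def generalization(y, observations, hypotheses, priors):
    """P(y belongs to the concept | observations): posterior-weighted
    average over hypotheses whose extension contains y."""
    post = posterior(observations, hypotheses, priors)
    return sum(p for name, p in post.items() if y in hypotheses[name])

# Illustrative hypothesis space over the numbers 1-20 (hypothetical names):
hypotheses = {
    "even numbers": {2, 4, 6, 8, 10, 12, 14, 16, 18, 20},
    "powers of two": {1, 2, 4, 8, 16},
}
priors = {"even numbers": 0.5, "powers of two": 0.5}
obs = [2, 4, 16]
# Both hypotheses fit the examples, but the smaller "powers of two"
# extension earns a higher likelihood, so 8 (in both sets) generalizes
# much more strongly than 6 (in "even numbers" only).
```

The sketch shows how the hypothesis space itself carries the domain knowledge: the same three observations would support different generalizations under a different set of candidate concepts.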

Johns Hopkins University, Whiting School of Engineering

Center for Language and Speech Processing
Hackerman 226
3400 North Charles Street, Baltimore, MD 21218-2680