A Bayesian view of inductive learning in humans and machines – Josh Tenenbaum (MIT)

When:
February 17, 2004 all-day

View Seminar Video
Abstract
In everyday learning and reasoning, people routinely draw successful generalizations from very limited evidence. Even young children can infer the meanings of words or the existence of hidden biological properties or causal relations from just one or a few relevant observations — far outstripping the capabilities of conventional learning machines. How do they do it? I will argue that the success of people's everyday inductive leaps can be understood as the product of domain-general rational Bayesian inferences constrained by people's implicit theories of the structure of specific domains. This talk will explore the interactions between people's domain theories and their everyday inductive leaps in several task domains, such as generalizing biological properties and learning word meanings. I will illustrate how domain theories generate the hypothesis spaces necessary for Bayesian generalization, and how these theories may themselves be acquired as the products of higher-order statistical inferences. I will also show how our approach to modeling human learning motivates new machine learning techniques for semi-supervised learning: generalizing from very few labeled examples with the aid of a large sample of unlabeled data.
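
To make the abstract's notion of "Bayesian generalization over a hypothesis space" concrete, the sketch below works through a toy example in the spirit of Tenenbaum's well-known "number game": a learner scores hypotheses by Bayes' rule, with a "size principle" likelihood under which each example is assumed drawn uniformly from the hypothesis's extension, so smaller hypotheses that still fit the data gain sharply with each new observation. The particular hypotheses, priors, and numbers here are illustrative assumptions, not material from the talk.

# Minimal sketch of Bayesian generalization with the size principle.
# The hypothesis space and priors are hypothetical choices for illustration.

from fractions import Fraction

# Each hypothesis: (extension over the numbers 1..100, prior probability).
hypotheses = {
    "even":            ({n for n in range(1, 101) if n % 2 == 0}, Fraction(1, 3)),
    "powers of 2":     ({2**k for k in range(1, 7)},              Fraction(1, 3)),
    "multiples of 10": ({n for n in range(10, 101, 10)},          Fraction(1, 3)),
}

def posterior(data):
    """P(h | data) proportional to P(data | h) * P(h).
    Size principle: P(data | h) = (1/|h|)^n if every example lies in h, else 0."""
    scores = {}
    for name, (extension, prior) in hypotheses.items():
        if all(x in extension for x in data):
            scores[name] = prior * Fraction(1, len(extension)) ** len(data)
        else:
            scores[name] = Fraction(0)
    z = sum(scores.values())
    return {name: s / z for name, s in scores.items()}

def generalize(data, y):
    """P(y belongs to the concept | data): sum posterior mass of hypotheses containing y."""
    post = posterior(data)
    return sum(p for name, p in post.items() if y in hypotheses[name][0])

# One example (16) leaves several hypotheses in play; three examples
# (16, 8, 2) concentrate the posterior on "powers of 2", because the
# smallest consistent hypothesis makes the data most likely.
print(posterior([16]))
print(posterior([16, 8, 2]))
print(generalize([16, 8, 2], 32))   # high: every surviving hypothesis contains 32
print(generalize([16, 8, 2], 6))    # low: only the broad "even" hypothesis contains 6

Note how the same machinery answers both questions the abstract raises: the hypothesis space (here hand-coded; in the talk's framing, generated by a domain theory) constrains what can be learned, and the posterior over hypotheses drives graded generalization to new items.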

Center for Language and Speech Processing