Seminars

Mental Programs in Humans and Machines
Fri, Jan 24 @ 12:00 pm – 1:15 pm

Abstract

How do humans efficiently learn new rules, causal laws, and mental algorithms, and how could AI systems do the same? From the perspective of human behavior, I will present results suggesting that representing knowledge in natural language gives a better account of human learning patterns than representing knowledge in a bespoke formal language, but that LLMs on their own fail to describe human inductive reasoning: instead, a rational analysis that takes a probabilistic modeling approach is needed to explain the human data. From the perspective of artificial intelligence, I present algorithms that augment LLMs with guardrails from probabilistic reasoning, improving their ability to write code, design experiments, and formalize causal knowledge about how the world works. This ultimately enables agents that learn by programming a world model describing their past experiences, yielding sample-efficient solutions to certain reinforcement learning problems. Lastly, I compare symbolic programs against purely neural representations, including in-context learning and its extensions, finding that neither strictly dominates the other and that they instead play complementary roles in inductive reasoning.
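To make the rational-analysis framing concrete, here is a minimal sketch (not taken from the talk) of Bayesian inference over natural-language hypotheses. The rules, prior scores, and observations are hypothetical placeholders standing in for quantities an LLM might supply in practice.

import math

# Candidate rules paired with stand-in prior log-probabilities.
# In practice these scores might come from an LLM's probability of
# generating each natural-language rule (hypothetical numbers here).
hypotheses = {
    "the number is even":            math.log(0.5),
    "the number is a multiple of 4": math.log(0.3),
    "the number is a power of 2":    math.log(0.2),
}

def consistent(rule: str, x: int) -> bool:
    # Toy 0/1 likelihood: does observation x satisfy the rule?
    if rule == "the number is even":
        return x % 2 == 0
    if rule == "the number is a multiple of 4":
        return x % 4 == 0
    if rule == "the number is a power of 2":
        return x > 0 and (x & (x - 1)) == 0
    return False

def posterior(observations):
    # Log-prior plus log-likelihood for each rule, normalized with log-sum-exp.
    scores = {}
    for rule, log_prior in hypotheses.items():
        log_lik = sum(0.0 if consistent(rule, x) else -math.inf for x in observations)
        scores[rule] = log_prior + log_lik
    finite = [s for s in scores.values() if s > -math.inf]
    z = max(finite)
    total = z + math.log(sum(math.exp(s - z) for s in finite))
    return {r: (math.exp(s - total) if s > -math.inf else 0.0) for r, s in scores.items()}

# Rules inconsistent with the data drop to zero; the remaining rules
# share the posterior mass in proportion to their priors.
print(posterior([2, 8, 16]))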

Bio

Ellis is an Assistant Professor of Computer Science at Cornell University, having previously completed his PhD in Cognitive Science at MIT. His research studies the intersection of program synthesis, AI, and human cognition, and has been recognized with an NSF CAREER Award, coverage in the New York Times, selection as one of the best human behavior articles of its year in Nature Communications, and a best paper award at this year's ARCPrize contest.

Center for Language and Speech Processing