The Hedgehog and the Fox: Language Technology and the Knowledge of Language
Fernando Pereira, WhizBang! Labs -- Research
October 17, 2000
"The fox knows many things, but the hedgehog knows one big thing." -- Archilochus
Statistical and machine-learning methods have allowed us to create classifiers, taggers, and information extractors that answer predetermined questions about linguistic material with surprising accuracy. However, we have a strong intuition that language "understanding" requires something more: the ability to answer accurately a wide range of questions about any input. What is the relationship between single-question learners and broader understanding? Information-theoretically, we can characterize language-processing tasks by the entropy of their output absent any information about the input, and thus draw a continuum between, say, binary text classification and machine translation.
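The output-entropy continuum can be made concrete with a toy calculation (the numbers below are illustrative assumptions, not figures from the talk): a balanced binary classifier's output carries one bit, while even a short sentence drawn from a modest vocabulary carries over a hundred.

```python
import math

def output_entropy(dist):
    """Shannon entropy (in bits) of a distribution over outputs,
    measured with no information about the input."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

# Binary text classification: two equally likely labels -> 1 bit.
binary_bits = output_entropy([0.5, 0.5])

# Toy "machine translation" output space: a 10-word sentence, each
# word drawn uniformly and independently from a 10,000-word
# vocabulary (hypothetical sizes) -> 10 * log2(10000) bits.
translation_bits = 10 * output_entropy([1 / 10000] * 10000)

print(binary_bits)       # 1.0 bit
print(translation_bits)  # roughly 133 bits
```

On this scale, tagging and information extraction fall in between: their outputs are more constrained than free translation but far less constrained than a single binary label.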
Linguistic representations can also be understood as codified answers to particular kinds of questions about linguistic material, each with its own degree of information-theoretic difficulty. From this point of view, the task of the learner is to acquire an accurate procedure for deciding whether a simple sentence follows from a discourse, rather than the more traditional tasks of deciding grammaticality or assigning structural descriptions. Structural descriptions would still play an important role in such a theory, but now as proxies for informational relationships between external linguistic events rather than as claims about mental representation.
Can hedgehogs evolve into foxes?
Fernando Pereira joined WhizBang! Labs as a distinguished research scientist in April 2000. He received a Ph.D. in Artificial Intelligence from the University of Edinburgh in 1982, and has held a variety of research and management positions, first at SRI International and later at AT&T Labs, where he led the machine learning and information retrieval research department from September 1995 to April 2000, as well as adjunct faculty positions at Stanford University, the University of Pennsylvania, and Carnegie Mellon University. His main research areas are computational linguistics and machine learning, and he is a principal contributor to several advances in finite-state models for speech and text processing in everyday industrial use. He has 72 research publications on computational linguistics, speech recognition, machine learning, and logic programming. He was elected Fellow of the American Association for Artificial Intelligence in 1991 for his contributions to computational linguistics and logic programming.