A Phonologist’s View of the Past Tense Controversy – Bruce Hayes (Department of Linguistics, UCLA)

October 24, 2000

In 1988, Pinker and Prince published an extensive commentary on Rumelhart and McClelland’s connectionist learning simulation for English past tenses. The review initiated a competition between rival psycholinguistic accounts of morphology. Connectionist approaches have attempted to derive rule-like behavior without rules, using the ability of their networks to analogize to existing lexical entries. The “dual mechanism” approach supposes that regular cases are derived by a single, very simple “default” rule, and that irregular forms are handled analogically.
The debate seems inconclusive to me. Connectionists have never taken on the most important of Pinker and Prince’s criticisms: that people can apply rules to novel forms that are dissimilar to any existing stem, and therefore lack a basis for analogy (e.g. ploamph → ploamphed). Dual mechanism theorists have trivialized what a rule can be, limiting themselves to the plainest of examples, and they have yet to submit their ideas to the most serious test, namely implementation as a computational model.
I suggest that (a) there really are rules, and there must be some way to learn them; (b) learning cannot take place with a snap of the fingers, but involves considerable computation; this is how people can learn morphological rules that are nontrivial. A good candidate for how this could be done is given in a largely forgotten section of Pinker and Prince (1988): they propose an algorithm that generalizes rules bottom-up from the lexicon. The algorithm creates rules in great numbers, quantitatively evaluating their effectiveness against the learning set. Over the past few years, Adam Albright and I have tested this approach by making it the basis of a computational learner. We have fed morphological paradigms to the learner and used the grammars it creates to model the behavior of people in Wug-testing experiments. In this talk I will describe some of our results.
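The bottom-up generalization step can be sketched roughly as follows. This is a hypothetical illustration, not Albright and Hayes’s actual implementation: the `factor` and `generalize` functions, and the restriction to purely suffixing changes, are assumptions made for the sketch. The idea is that two word-specific rules are collapsed into one rule that keeps their shared change and their shared final context.

```python
# A minimal sketch of bottom-up rule generalization over (stem, past) pairs.
# Assumes a purely suffixing change, as in regular English past tenses;
# the function names are hypothetical, not from the talk's learner.

def factor(stem, past):
    """Split a (stem, past) pair into (unchanged context, suffix added)."""
    i = 0
    while i < len(stem) and i < len(past) and stem[i] == past[i]:
        i += 1
    return stem[:i], past[i:]          # e.g. ("jump", "ed")

def generalize(pair_a, pair_b):
    """Collapse two word-specific rules into one: keep the shared suffix
    change and the longest common final context of the two stems."""
    ctx_a, change_a = factor(*pair_a)
    ctx_b, change_b = factor(*pair_b)
    if change_a != change_b:
        return None                    # different changes: no shared rule
    # keep only the material the two contexts share at their right edge
    shared = []
    for x, y in zip(reversed(ctx_a), reversed(ctx_b)):
        if x != y:
            break
        shared.append(x)
    context = "".join(reversed(shared))
    return f"X{context} -> X{context}{change_a}"

print(generalize(("jump", "jumped"), ("walk", "walked")))   # X -> Xed
print(generalize(("kick", "kicked"), ("pick", "picked")))   # Xick -> Xicked
```

Iterating this pairwise generalization over a whole lexicon yields rules at many levels of generality, from fully general (X → Xed) down to narrow, context-laden ones, which is what makes the quantitative evaluation step necessary.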

- Candidate rules are evaluated by learners on the basis of a “batting average” (hits/scope), adjusted downward for rules that apply to relatively few forms.
- The basic Pinker/Prince algorithm should be modified to avoid a “learning trap,” which arises when an affix allomorph is regular in one context (jump[t]) but irregular in another (dwell[t]).
- The algorithm learns a grammar rich enough that many of the cases it covers are thought by other participants in the debate to be the result of analogy, raising the possibility that the rule/analogy boundary falls further into the realm of phonological detail than has previously been assumed.
- Our algorithm learns “islands of reliability” for the default mapping (for example, all English [XIp] verbs are regular). Under one interpretation of the dual mechanism model, such islands should not exist, but tentative evidence from experiments on Italian, Spanish, and English suggests that they do.
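The “batting average” with its downward adjustment for sparse evidence might be implemented in spirit as follows. This is a sketch only: the smoothing constants and the confidence-limit formula are illustrative assumptions, not the values used in the learner described in the talk.

```python
# Sketch of rule evaluation by adjusted "batting average" (hits/scope).
# The smoothing and z value are illustrative assumptions.
import math

def reliability(hits, scope):
    """Raw batting average: correct outputs (hits) over the number of
    forms the rule could apply to (scope)."""
    return hits / scope

def adjusted_reliability(hits, scope, z=0.84):
    """Penalize rules with small scope by taking a lower confidence
    limit on a smoothed proportion (normal approximation)."""
    p = (hits + 0.5) / (scope + 1.0)   # smooth so 3/3 is not certainty
    margin = z * math.sqrt(p * (1.0 - p) / scope)
    return max(0.0, p - margin)

# A perfect rule attested in only 3 forms ends up trusted less than a
# near-perfect rule attested in 300:
print(round(adjusted_reliability(3, 3), 3))      # small scope: heavy penalty
print(round(adjusted_reliability(295, 300), 3))  # large scope: light penalty
```

The effect is exactly the one the first point above describes: two rules with the same raw batting average are ranked apart when one of them rests on far less evidence.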

By allowing rules of unlimited generality to be learned, our approach overcomes the “ploamph” problem faced by purely analogical approaches. By processing the data in great detail, it learns the full richness of morphological and phonological patterning.
* Research done in collaboration with Adam Albright of UCLA.

Center for Language and Speech Processing