AXOMs: Asynchronous Cascaded Self-organizing Maps for Language Learning – Vito Pirrelli (CNR)
AXOMs are hierarchically arranged Self-Organizing Maps (SOMs) standing in an asynchronous feed-forward relation. In AXOMs, an incoming input word is sampled on a short time scale and recoded through the topological activation state of a first-level SOM, called the phonotactic layer, at the bottom of the hierarchy. This activation state is then projected upwards, on a longer time scale, to the second-level map in the hierarchy, the lexical layer. The talk provides the formal underpinnings of AXOMs, together with a concrete illustration of their behaviour in two language-learning sessions, simulating the acquisition of Italian and English verb forms respectively. The architecture is capable of mimicking two levels of long-term memory chunking, low-level segmentation of phonotactic patterns and higher-level morphemic chunking, together with their feeding relation. It turns out that the topology of second-level maps mirrors a meta-paradigmatic organization of the inflectional lexicon, clustering verb paradigms that share the same conjugation class on the basis of the principle of formal contrast.

Examples of Vito's recent work are available here. These papers may be of particular interest:

Calderone, B., I. Herreros and V. Pirrelli (2007) "Learning Inflection: The Importance of Starting Big", Lingue e Linguaggio, vol. 2.

Pirrelli, V. and I. Herreros (2007) "Learning Morphology by Itself", in Proceedings of the Fifth Mediterranean Morphology Meeting.
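The two-level, two-time-scale scheme described above can be sketched in code. The following is a minimal illustrative sketch, not the actual AXOM formalism: the `SOM` class, the Gaussian activation bump, the bigram encoding of words (short time scale), and the averaging of activations into a word-level lexical input (longer time scale) are all simplifying assumptions made for the example.

```python
import numpy as np

class SOM:
    """Minimal Kohonen self-organizing map on a 2-D grid (illustrative sketch)."""
    def __init__(self, rows, cols, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.rows, self.cols = rows, cols
        self.weights = rng.random((rows * cols, dim))
        # Grid coordinates of each unit, used for the neighborhood function.
        self.coords = np.array([(r, c) for r in range(rows)
                                for c in range(cols)], dtype=float)

    def bmu(self, x):
        """Index of the best-matching unit for input vector x."""
        return int(np.argmin(np.linalg.norm(self.weights - x, axis=1)))

    def activation(self, x):
        """Topological activation state: a Gaussian bump centred on the BMU."""
        d = np.linalg.norm(self.coords - self.coords[self.bmu(x)], axis=1)
        return np.exp(-d ** 2 / 2.0)

    def train(self, data, epochs=20, lr0=0.5, sigma0=2.0):
        """Standard SOM learning with decaying rate and neighborhood width."""
        for t in range(epochs):
            lr = lr0 * (1 - t / epochs)
            sigma = sigma0 * (1 - t / epochs) + 0.5
            for x in data:
                d = np.linalg.norm(self.coords - self.coords[self.bmu(x)], axis=1)
                h = np.exp(-d ** 2 / (2 * sigma ** 2))
                self.weights += lr * h[:, None] * (x - self.weights)

# Toy input: "words" over a 4-letter alphabet, sampled as bigrams
# (the short time scale of the phonotactic layer).
ALPHABET = "abcd"

def bigram_vecs(word):
    """Encode each bigram of a word as a pair of concatenated one-hot vectors."""
    vecs = []
    for a, b in zip(word, word[1:]):
        v = np.zeros(2 * len(ALPHABET))
        v[ALPHABET.index(a)] = 1.0
        v[len(ALPHABET) + ALPHABET.index(b)] = 1.0
        vecs.append(v)
    return vecs

words = ["abab", "cdcd", "abad", "cdcb"]

# First-level (phonotactic) map, trained on individual bigrams.
phono = SOM(4, 4, dim=8)
phono.train([v for w in words for v in bigram_vecs(w)])

def lexical_input(word):
    """Longer time scale: project a word's mean phonotactic activation upwards."""
    return np.mean([phono.activation(v) for v in bigram_vecs(word)], axis=0)

# Second-level (lexical) map, trained on whole-word activation states.
lex = SOM(3, 3, dim=16)
lex.train([lexical_input(w) for w in words])
```

In this sketch the asynchrony is reduced to a simple average over the word's bigrams; the real architecture couples the two layers through time in a richer way, but the feeding relation, with low-level phonotactic states as the input vocabulary of the lexical map, is the same.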
Vito Pirrelli received a laurea degree in the Humanities from the Linguistics Department of Pisa University (Italy) and a PhD in Computational Linguistics from Salford University (UK) with a dissertation on "Morphology, Analogy and Machine Translation". He is currently Research Director at the CNR Institute for Computational Linguistics in Pisa and teaches "Computer for Humanities" at the Department of Linguistics of Pavia University. Author of two books and several journal and conference articles in computational and theoretical linguistics, his main research interests include machine language learning; computer models of the mental lexicon; psycho-computational models of morphology learning and processing; hybrid models of language processing; information extraction; and theoretical morphology.