Student Seminar – Neha Verma: “Exploring Geometric Representational Disparities Between Multilingual and Bilingual Translation Models”
3400 N. Charles Street, Baltimore, MD 21218
Abstract
Multilingual machine translation has proven immensely useful for both parameter efficiency and overall performance across many language pairs, thanks to complete parameter sharing. However, some language pairs in multilingual models can perform worse than in bilingual models, especially in the one-to-many translation setting. Motivated by these empirical differences, we examine the geometric differences between representations from bilingual models and those from one-to-many multilingual models. Specifically, we measure the isotropy of these representations using intrinsic dimensionality and IsoScore, in order to quantify how these representations utilize the dimensions of their underlying vector space. We find that for a given language pair, multilingual model decoder representations are consistently less isotropic than comparable bilingual model decoder representations. Additionally, we show that much of this anisotropy in multilingual decoder representations can be attributed to the modeling of language-specific information, which limits the representational capacity that remains.
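To make the notion of isotropy concrete: a representation space is isotropic when variance is spread evenly across dimensions, and anisotropic when it collapses into a few directions. The sketch below is not the IsoScore or intrinsic-dimensionality estimator used in the talk; it is a minimal illustrative proxy (the participation ratio of the covariance eigenvalues), which likewise reports how many dimensions a point cloud effectively uses. The function name and the synthetic data are this sketch's own assumptions.

```python
import numpy as np

def participation_ratio(X):
    """Effective number of dimensions used by points X (shape: num_points x dim).

    PR = (sum of covariance eigenvalues)^2 / (sum of squared eigenvalues).
    Ranges from 1 (all variance in one direction, highly anisotropic)
    up to dim (variance spread evenly, isotropic).
    """
    X = X - X.mean(axis=0)                  # center the point cloud
    cov = np.cov(X, rowvar=False)           # dim x dim covariance matrix
    eig = np.linalg.eigvalsh(cov)           # eigenvalues (variances along principal axes)
    eig = np.clip(eig, 0.0, None)           # guard against tiny negative values
    return eig.sum() ** 2 / (eig ** 2).sum()

rng = np.random.default_rng(0)
dim = 64
iso = rng.normal(size=(1000, dim))               # roughly isotropic Gaussian cloud
aniso = iso * np.geomspace(1.0, 0.01, dim)       # variance concentrated in a few dims

print(participation_ratio(iso))    # near 64: uses nearly all dimensions
print(participation_ratio(aniso))  # far smaller: anisotropic
```

Under this proxy, the paper's finding would show up as multilingual decoder states scoring lower than bilingual ones at the same layer; the actual study uses IsoScore and intrinsic dimensionality rather than this simplified measure.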