Xian Li (Facebook) “Build Multilingual Models Efficiently and Responsibly”- Virtual Visit
3400 N. Charles Street
Multilingual models are an active area of research in language understanding and generation. Not only do they raise interesting academic questions, such as language universality, they are also an appealing solution in practice, reducing the training and deployment costs associated with individual language-specific models. Recent advances in multilingual models have focused primarily on scaling, pushing the limits of model and data size. This has led to improved task performance (e.g. translation quality), especially for low-resource languages. In this talk, using multilingual machine translation as an example task, I will discuss several research challenges in multilingual NLP which are not addressed by scaling. The central question is how to build multilingual models efficiently and responsibly. Insights from tackling those questions in turn shed light on how to scale smarter.
Xian Li is a research scientist at Facebook AI Research (FAIR). Her research interests lie at the intersection of machine learning and natural language processing. Her work on machine translation, multilingual NLP, and robustness has led to both research publications and applications in production.