The Translation Barrier Hypothesis: Multilingual Generation with Large Language Models Suffers from Implicit Translation Failure – Niyati Bafna (JHU)

When:
November 3, 2025 @ 12:00 pm – 1:15 pm
Where:
Hackerman Hall B17
Cost:
Free

Abstract

Multilingual generation with large language models (LLMs) is often of poor quality for mid- to low-resource languages, but the causes are not well understood. We first demonstrate the existence of an implicit task-solving → translation pipeline for generation, whereby the model first solves the required task in a largely target-language-agnostic manner and subsequently translates the answer concepts into the intended target language. We hypothesize that failure of the translation stage, despite task-solving success, is an important culprit for the observed low quality of final outputs, and formalize this as the translation barrier hypothesis. We quantify the extent to which each stage in the pipeline is responsible for final failure on a word translation task across language pairs, and find that the translation barrier explains a dominant portion of error for a majority of language pairs and is especially severe for low-resource target languages. Our results highlight an important bottleneck for end-to-end multilingual generation, relevant for future work seeking to improve multilinguality in LLMs.
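To make the two-stage error attribution concrete, here is a minimal illustrative sketch (not the talk's actual methodology): given per-example judgments of whether the task-solving stage succeeded and whether the final target-language output is correct, it splits final failures between the two stages. The field names and example data are hypothetical.

```python
def attribute_errors(examples):
    """Split final failures into task-solving vs. translation-stage failures.

    Each example is a dict with hypothetical boolean keys:
      'task_solved'   - the model produced the right answer concept
      'final_correct' - the final target-language output is correct
    A failure where the task was nonetheless solved is attributed to the
    translation stage (the "translation barrier").
    """
    failures = [ex for ex in examples if not ex["final_correct"]]
    if not failures:
        return {"task_solving": 0.0, "translation": 0.0}
    translation = sum(1 for ex in failures if ex["task_solved"])
    task = len(failures) - translation
    n = len(failures)
    return {"task_solving": task / n, "translation": translation / n}

# Hypothetical results for one language pair.
examples = [
    {"task_solved": True,  "final_correct": True},   # full success
    {"task_solved": True,  "final_correct": False},  # translation barrier
    {"task_solved": True,  "final_correct": False},  # translation barrier
    {"task_solved": True,  "final_correct": False},  # translation barrier
    {"task_solved": False, "final_correct": False},  # task-solving failure
]
print(attribute_errors(examples))  # {'task_solving': 0.25, 'translation': 0.75}
```

In this toy sample, three of the four failures occur despite task-solving success, so the translation stage accounts for 75% of the error, mirroring the kind of dominance the hypothesis predicts for low-resource target languages.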

Bio

I’m a third-year PhD student advised by David Yarowsky, working on multilinguality and low-resource NLP.

Center for Language and Speech Processing