**Abstract**

While the “deep learning tsunami” continues to define the state of the art in speech and language processing, finite-state transducer grammars developed by linguists and engineers are still widely used in industrial, highly multilingual settings, particularly for symbolic, “front-end” speech applications. In this talk, I will first briefly review the current state of the OpenFst and OpenGrm finite-state transducer libraries. I will then review two “late-breaking” algorithms found in these libraries. The first is a heuristic but highly effective general-purpose optimization routine for weighted transducers. The second is an algorithm for computing the single shortest string of non-deterministic weighted acceptors that lack certain properties required by classic shortest-path algorithms. I will then illustrate how the OpenGrm tools can be used to induce a finite-state string-to-string transduction model known as a pair n-gram model. This model has been applied to grapheme-to-phoneme conversion, loanword detection, abbreviation expansion, and back-transliteration, among other tasks.
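To give a rough flavor of the pair n-gram idea: aligned grapheme–phoneme pairs are fused into single “pair symbols,” and an ordinary n-gram model is then trained over these pair-symbol sequences. The following is a minimal, self-contained Python sketch of that idea with a toy bigram model and a greedy decoder; it is purely illustrative and does not use OpenGrm or Pynini, whose actual models are weighted FSTs decoded with shortest-path algorithms rather than greedy search. The training data, alignments, and function names here are all invented for the example.

```python
from collections import defaultdict

# Toy aligned training data. Each word is a list of (grapheme, phoneme)
# pairs; real pair n-gram pipelines obtain such alignments automatically
# (e.g., via expectation maximization), not by hand as here.
TRAIN = [
    [("c", "k"), ("a", "æ"), ("t", "t")],  # "cat"
    [("c", "k"), ("a", "æ"), ("n", "n")],  # "can"
    [("c", "k"), ("a", "ɑ"), ("r", "r")],  # "car"
    [("a", "æ"), ("t", "t")],              # "at"
]

# Fuse each aligned pair into one "pair symbol" like "c:k", then count
# bigrams over pair symbols ("<s>" marks the start of a word).
symbols = set()
bigrams = defaultdict(lambda: defaultdict(int))
for word in TRAIN:
    prev = "<s>"
    for g, p in word:
        sym = f"{g}:{p}"
        symbols.add(sym)
        bigrams[prev][sym] += 1
        prev = sym

def transduce(graphemes: str) -> str:
    """Greedy decode: for each grapheme, pick the pair symbol whose
    grapheme side matches and whose bigram count given the previous
    pair symbol is highest; ties break alphabetically."""
    out, prev = [], "<s>"
    for g in graphemes:
        candidates = sorted(s for s in symbols if s.startswith(g + ":"))
        if not candidates:
            out.append(g)  # pass unknown graphemes through unchanged
            continue
        best = max(candidates, key=lambda s: bigrams[prev][s])
        out.append(best.split(":", 1)[1])
        prev = best
    return "".join(out)

print(transduce("cat"))  # kæt
```

A real pair n-gram model would instead compile the n-gram counts into a weighted acceptor over pair symbols, split each pair symbol back into its input and output sides to obtain a transducer, and decode by composition and shortest-path search, which considers all alignments rather than committing greedily symbol by symbol.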

**Biography**

Kyle Gorman is an assistant professor of linguistics at the Graduate Center, City University of New York, and director of the master’s program in computational linguistics; he is also a software engineer in the speech and language algorithms group at Google. With Richard Sproat, he is the coauthor of *Finite-State Text Processing* (Morgan & Claypool, 2021) and the creator of Pynini, a finite-state text processing library for Python. He has also published on statistical methods for comparing computational models, text normalization, grapheme-to-phoneme conversion, and morphological analysis, as well as many topics in linguistic theory.