Abstract
Modern learning architectures for natural language processing have been very successful at incorporating vast amounts of text into their parameters. By and large, however, such models store and use knowledge in distributed and decentralized ways. This proves unreliable and makes the models ill-suited for knowledge-intensive tasks that require reasoning over factual information expressed in language. In this talk, I will give a few examples of alternative architectures that tackle these challenges. In particular, we can improve the performance of such (language) models by representing, storing and accessing knowledge in a dedicated memory component.
This talk is based on several joint works with Yury Zemlyanskiy (Google Research), Michiel de Jong (USC and Google Research), William Cohen (Google Research and CMU), and our other collaborators at Google Research.
Biography
Fei is a research scientist at Google Research. Before that, he was a Professor of Computer Science at the University of Southern California. His primary research interests are machine learning and its application to various AI problems: speech and language processing, computer vision, robotics, and, more recently, weather forecasting and climate modeling. He holds a PhD (2007) in Computer and Information Science from the University of Pennsylvania, and a B.Sc. and an M.Sc. in Biomedical Engineering from Southeast University (Nanjing, China).
Abstract
Zipf’s law is commonly glossed by the aphorism “infrequent words are frequent,” but in practice it has often meant that there are three types of words: frequent, infrequent, and out-of-vocabulary (OOV). Speech recognition solved the problem of frequent words around 1970 (with dynamic time warping). Hidden Markov models worked well for moderately infrequent words, but the problem of OOV words was not solved until sequence-to-sequence neural nets de-reified the concept of a word. Many other social phenomena follow power-law distributions. The number of native speakers of the N-th most spoken language, for example, is approximately 1.44 billion divided by N raised to the power 1.09 (written out below). In languages with sufficient data, we have shown that monolingual pre-training outperforms multilingual pre-training. In less-frequent languages, multilingual knowledge transfer can significantly reduce phone error rates. In languages with no training data, unsupervised ASR methods can be proven to converge, as long as the eigenvalues of the language model are sufficiently well separated to be measurable. Other systems of social categorization may follow similar power-law distributions. Disability, for example, can cause speech patterns that were never seen in the training database, but not all disabilities need do so. The inability of speech technology to work for people with even common disabilities is probably caused by a lack of data, and can probably be solved by finding better modes of interaction between technology researchers and the communities served by technology.
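For concreteness, the power law quoted in the abstract can be restated as a display equation; here S(N) is a label introduced only for this restatement, denoting the number of native speakers of the N-th most spoken language, with the constant and exponent exactly as given above:

\[ S(N) \approx \frac{1.44 \times 10^{9}}{N^{1.09}} \]

Under this fit, the most spoken language is predicted to have about 1.44 billion native speakers, while the 100th most spoken language is predicted to have about 1.44 × 10^9 / 100^1.09 ≈ 9.5 million.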
Biography
Mark Hasegawa-Johnson is a William L. Everitt Faculty Fellow of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. He has published research in speech production and perception, source separation, voice conversion, and low-resource automatic speech recognition.