Abstract
Zipf’s law is commonly glossed by the aphorism “infrequent words are frequent,” but in practice, it has often meant that there are three types of words: frequent, infrequent, and out-of-vocabulary (OOV). Speech recognition solved the problem of frequent words in 1970 (with dynamic time warping). Hidden Markov models worked well for moderately infrequent words, but the problem of OOV words was not solved until sequence-to-sequence neural nets de-reified the concept of a word. Many other social phenomena follow power-law distributions. The number of native speakers of the N’th most spoken language, for example, is approximately 1.44 billion divided by N raised to the power 1.09. In languages with sufficient data, we have shown that monolingual pre-training outperforms multilingual pre-training. In less-frequent languages, multilingual knowledge transfer can significantly reduce phone error rates. In languages with no training data, unsupervised ASR methods can be proven to converge, as long as the eigenvalues of the language model are sufficiently well separated to be measurable. Other systems of social categorization may follow similar power-law distributions. Disability, for example, can cause speech patterns that were never seen in the training database, though not all disabilities do. The inability of speech technology to work for people with even common disabilities is probably caused by a lack of data, and can probably be solved by finding better modes of interaction between technology researchers and the communities served by technology.
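Restating the power law quoted above as an equation (the constants 1.44 billion and 1.09 are taken directly from the abstract; the symbol S(N) is introduced here only for illustration):

S(N) \approx \frac{1.44 \times 10^{9}}{N^{1.09}}

where S(N) denotes the number of native speakers of the N’th most spoken language.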
Biography
Mark Hasegawa-Johnson is a William L. Everitt Faculty Fellow of Electrical and Computer Engineering at the University of Illinois in Urbana-Champaign. He has published research in speech production and perception, source separation, voice conversion, and low-resource automatic speech recognition.
Abstract
The arms race over the past year to build ever larger and more powerful language models (LMs) has been remarkable. Yet incorporating LMs effectively into practical applications that facilitate manual workflows remains challenging. I will discuss LMs’ limiting factors and our efforts to overcome them. I will start with challenges surrounding efficient and robust LM alignment. I will share insights from our recent paper “Self-Instruct” (ACL 2023), where we used a vanilla (unaligned) LM to align itself, an approach that has yielded some success. Then, I will move on to the challenge of tracing the output of LMs to reliable sources, a weakness that makes them prone to hallucinations. I will discuss our recent ‘according-to’ prompting approach, which steers LMs to quote directly from sources observed in their pre-training data. If time permits, I will discuss our ongoing project to adapt LMs to interact with web pages. Throughout the presentation, I will highlight our progress and end with open questions that will guide our future work.
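To make the ‘according-to’ idea above concrete, here is a minimal sketch of the kind of grounding prompt the approach describes (the exact wording, function name, and example question are illustrative assumptions, not the authors’ implementation):

def according_to_prompt(question: str, source: str = "Wikipedia") -> str:
    # Steer the LM toward quoting its pre-training data by naming a trusted source.
    return f"{question}\nRespond using only information that can be attributed to {source}."

# The returned string is what would be sent to the LM as its input prompt.
print(according_to_prompt("What is the tallest mountain on Earth?"))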
Biography
Daniel Khashabi is an assistant professor in computer science at Johns Hopkins University and a member of the Center for Language and Speech Processing (CLSP). He is interested in building reasoning-driven modular NLP systems that are robust, transparent, and communicative, particularly those that use natural language as the communication medium. Khashabi has published over 40 papers on natural language processing and AI in top-tier venues. His research has won the ACL 2023 Outstanding Paper Award, the NAACL 2022 Best Paper Award, research gifts from the Allen Institute for AI, and an Amazon Research Award (2023). Before joining Hopkins, he was a postdoctoral fellow at the Allen Institute for AI (2019-2022) and obtained a Ph.D. from the University of Pennsylvania in 2019.
Abstract
Visually rich documents (scanned or digital) remain important for many consumer and business use cases. During this talk, we will share recent work from our team in the Document Intelligence Lab of Adobe Research on understanding, creating, and interacting with these documents. First, we’ll share a line of work on building models that decompose and understand the structure of documents to support use cases around document analysis and accessibility. Next, we’ll explore document semantic understanding in a project where we convert natural language contract clauses to code to support business automation. Finally, we’ll discuss DocEdit, a model and dataset that enable editing structured documents from natural language.
Biographies
Rajiv Jain is a Senior Research Scientist in the Document Intelligence Lab at Adobe Research, where his research focuses on understanding the layout and content of documents and on new ways of interacting with them. Prior to joining Adobe, Rajiv was a consultant at DARPA, where he worked on the Media Forensics Program to secure digital imagery. He previously served for 10 years as a researcher for the Department of Defense, where he worked on projects around large-scale systems, computer vision, and network security. He received his PhD in computer science from the University of Maryland, College Park, working in the field of document image analysis and retrieval.
Chris Tensmeyer is a Research Scientist in the Document Intelligence Lab of Adobe Research, where he primarily focuses on multi-modal document layout and content understanding. Since he joined Adobe 5 years ago, his work has directly impacted popular Adobe features such as mobile Acrobat Liquid Mode, PDF table extraction, handwriting recognition, and scanned document detection. His other research interests include general Computer Vision and Deep Learning. He received his PhD in Computer Science from Brigham Young University on the topic of Deep Learning for Document Image Analysis.