Seminars

Gopala Anumanchipalli (University of California, San Francisco), “Decoding Speech and Language Representations from the Brain” @ Hackerman Hall B17
Fri, Nov 1 @ 12:00 pm – 1:15 pm

Abstract

Spoken communication is basic to who we are. Neurological conditions that result in loss of speech can be devastating for affected patients. This talk will summarize recent efforts to decode neural activity recorded directly from the surface of the speech cortex during fluent speech production, monitored using intracranial electrocorticography (ECoG). Decoding speech from neural activity is challenging because speaking requires precise and rapid multi-dimensional control of the vocal tract articulators. I will first describe the articulatory encoding characteristics of the speech motor cortex and compare them against other representations, such as phonemes. I will then describe deep learning approaches that convert neural activity into these articulatory physiological signals, which can then be transformed into audible speech acoustics or decoded to text. We show that such biomimetic strategies make optimal use of available data, generalize well across subjects, and also support silent speech decoding. These results set a new benchmark for the development of Brain-Computer Interfaces for assistive communication in paralyzed individuals with intact cortical function.
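
As a rough illustration only: the abstract describes a two-stage, biomimetic pipeline in which neural (ECoG) activity is first mapped to articulatory kinematics, and the kinematics are then mapped to acoustics. The sketch below shows that structure in PyTorch; the electrode counts, articulatory feature dimensions, and recurrent architecture are assumptions made for illustration, not the speaker's actual model.

import torch
import torch.nn as nn

class NeuralToArticulatory(nn.Module):
    # Stage 1 (assumed): ECoG features -> vocal-tract articulator trajectories.
    def __init__(self, n_electrodes=256, n_articulators=33, hidden=256):
        super().__init__()
        self.rnn = nn.LSTM(n_electrodes, hidden, num_layers=2,
                           batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_articulators)

    def forward(self, ecog):                  # ecog: (batch, time, n_electrodes)
        h, _ = self.rnn(ecog)
        return self.out(h)                    # (batch, time, n_articulators)

class ArticulatoryToAcoustic(nn.Module):
    # Stage 2 (assumed): articulator trajectories -> acoustic features
    # (e.g. a mel spectrogram), which a vocoder could render as audible speech
    # or a recognizer could decode to text.
    def __init__(self, n_articulators=33, n_mels=80, hidden=256):
        super().__init__()
        self.rnn = nn.LSTM(n_articulators, hidden, num_layers=2,
                           batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_mels)

    def forward(self, articulators):
        h, _ = self.rnn(articulators)
        return self.out(h)

# Example forward pass with random tensors standing in for recorded ECoG.
ecog = torch.randn(1, 200, 256)               # 200 time frames of neural features
stage1, stage2 = NeuralToArticulatory(), ArticulatoryToAcoustic()
mel = stage2(stage1(ecog))                    # (1, 200, 80) acoustic features
print(mel.shape)

The point of the intermediate articulatory representation is that it mirrors the physiology the motor cortex actually controls, which is the sense in which the abstract calls the strategy biomimetic.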

Biography

Gopala Anumanchipalli, PhD, is an associate researcher in the Department of Neurological Surgery at the University of California, San Francisco. His research focuses on understanding the neural mechanisms of human speech production, toward developing next-generation Brain-Computer Interfaces. Gopala was a postdoctoral fellow at UCSF working with Edward F. Chang, MD, and previously received his PhD in Language and Information Technologies from Carnegie Mellon University, where he worked with Prof. Alan Black on speech synthesis.

Matt Gardner (Allen Institute for Artificial Intelligence), “NLP Evaluations that We Believe In” @ Hackerman Hall B17
Mon, Mar 9 @ 12:00 pm – 1:15 pm

Abstract

With all of the modeling advancements in recent years, NLP benchmarks have been falling over left and right: “human performance” has been reached on SQuAD 1 and 2, GLUE and SuperGLUE, and many commonsense datasets. Yet no serious researcher actually believes that these systems understand language, or even really solve the underlying tasks behind these datasets. To get benchmarks that we actually believe in, we need both to think more deeply about the language phenomena that our benchmarks are targeting and to make our evaluation sets more rigorous. I will first present ORB, an Open Reading Benchmark that collects many reading comprehension datasets that we (and others) have recently built, targeting various aspects of what it means to read. I will then present contrast sets, a way of creating non-i.i.d. test sets that more thoroughly evaluate a model’s abilities on some task, decoupling training-data artifacts from test labels.
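
To make the contrast-set idea concrete, here is a toy sketch. The sentiment examples and the stand-in model below are invented for illustration and are not drawn from the actual contrast-set datasets; only the notion of scoring a model on whole contrast sets rather than isolated examples comes from the talk.

# Each original test example is paired with small, expert-written perturbations
# that change the gold label. A model is then scored on "contrast consistency":
# the fraction of contrast sets on which it labels every example correctly.
contrast_set = [
    {"text": "A gripping, beautifully acted film.", "label": "positive"},   # original example
    {"text": "A gripping but poorly acted film.",   "label": "negative"},   # perturbation
    {"text": "A dull, beautifully acted film.",     "label": "negative"},   # perturbation
]

def contrast_consistency(model, contrast_sets):
    correct_sets = sum(
        all(model(ex["text"]) == ex["label"] for ex in cset)
        for cset in contrast_sets
    )
    return correct_sets / len(contrast_sets)

# A degenerate stand-in "model" that always predicts positive gets the original
# example right but fails the contrast set as a whole.
print(contrast_consistency(lambda text: "positive", [contrast_set]))   # 0.0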

Biography

Matt is a senior research scientist at the Allen Institute for AI on the AllenNLP team. His research focuses primarily on getting computers to read and answer questions, dealing both with open domain reading comprehension and with understanding question semantics in terms of some formal grounding (semantic parsing). He is particularly interested in cases where these two problems intersect, doing some kind of reasoning over open domain text. He is the original architect of the AllenNLP toolkit, and he co-hosts the NLP Highlights podcast with Waleed Ammar and Pradeep Dasigi.
