Communication Disorders and Speech Technology – Elmar Noeth (Friedrich-Alexander University Erlangen-Nuremberg)

When:
December 1, 2009 all-day

Abstract
In this talk we will give an overview of the different kinds of communication disorders. We will concentrate on communication disorders related to language and speech (i.e., we will not look at disorders such as blindness or deafness). Speech and language disorders can range from simple sound substitution to the inability to understand or use language. Thus, a disorder may affect one or several linguistic levels: a patient with an articulation disorder cannot correctly produce speech sounds (phonemes) because of voice disorders or imprecise placement, timing, pressure, speed, or flow of movement of the lips, tongue, or throat. Such a patient's speech may be acoustically unintelligible, yet the syntactic, semantic, and pragmatic levels are not affected. With other pathologies, e.g. Wernicke’s aphasia, the acoustics of the speech signal may be intelligible, yet the patient is unintelligible due to the mix-up of words (semantic paraphasia) or sounds (phonemic paraphasia). We will look at what linguistic knowledge has to be modeled in order to analyze different pathologies with speech technology, how difficult the task is, and how speech technology can support the speech therapist in the tasks of diagnosis, therapy control, comparison of therapies, and screening.

Joint work with Andreas Maier, Tino Haderlein, Stefan Steidl, and Maria Schuster.
Biography
Elmar Noeth studied in Erlangen and at MIT and obtained his ‘Diplom’ in Computer Science and his doctoral degree at the Friedrich-Alexander-Universität Erlangen-Nürnberg in 1985 and 1990, respectively. From 1985 to 1990 he was a member of the research staff of the Institute for Pattern Recognition (Lehrstuhl für Mustererkennung), working on the use of prosodic information in automatic speech understanding. Since 1990 he has been an assistant professor, and since 2008 a full professor, at the same institute, where he heads the speech group. He is one of the founders of the Sympalog company, which markets conversational dialog systems. He serves on the editorial boards of Speech Communication and the EURASIP Journal on Audio, Speech, and Music Processing and is a member of the IEEE SLTC. His current interests are prosody, analysis of pathological speech, computer-aided language learning, and emotion analysis.
