Multimodality, Context and Continuous Emotional Dynamics for Recognition and Analysis of Emotional Human States, and Applications to Healthcare – Angeliki Metallinou (University of Southern California)

March 1, 2013 all-day

Human expressive communication is characterized by a continuous flow of multimodal information, such as vocal, facial, and bodily gestures, which may convey a participant’s affect. Additionally, a participant’s emotional state is typically expressed in context, and generally evolves with variable intensity and clarity over the course of an interaction. In this talk, I will present computational approaches that address these complex aspects of emotional expression: multimodality, the use of context, and continuous emotional dynamics. First, I will describe hierarchical frameworks that incorporate temporal contextual information for emotion recognition, and demonstrate how such approaches, which are able to exploit typical emotional patterns, improve recognition performance. Second, extending this notion of emotional evolution, I will describe methods for continuously estimating emotional states, such as the degree of intensity or positivity of a participant’s emotion, during dyadic interactions. Such continuous estimates could highlight emotionally salient regions in long interactions. The systems described are multimodal and combine a variety of information, such as speech, facial expressions, and full body language, in the context of dyadic settings. Finally, I will discuss the utility of computational approaches for healthcare applications by describing ongoing work on facial expression analysis for quantifying atypicality in the affective facial expressions of children with autism spectrum disorders.
Angeliki Metallinou received her Diploma in electrical and computer engineering from the National Technical University of Athens, Greece, in 2007, and her Master’s degree in electrical engineering from the University of Southern California (USC) in 2009, where she is currently pursuing her Ph.D. degree. Since Fall 2007 she has been a member of the Signal Analysis and Interpretation Lab (SAIL) at USC, where she has worked on projects involving multimodal emotion recognition, computational analysis of theatrical performance, and computational approaches for autism research. During summer 2012, she interned at Microsoft Research, working on belief state tracking for spoken dialog systems. Her research interests include speech and multimodal signal processing, affective computing, machine learning, statistical modeling, and dialog systems.

Center for Language and Speech Processing