Abstract
The written arguments of students are educational data that can be automatically mined for purposes of student instruction and assessment. This talk will illustrate some of the opportunities and challenges in educationally-oriented argument mining from text. I will first describe how we are using natural language processing to develop argument mining systems that are being embedded in educational technologies for essay grading, peer review, and writing revision analysis. I will then present the results of empirical evaluations of these technologies, using argumentative writing data obtained from elementary, high school, and university students.
Biography
Diane Litman is Professor of Computer Science, Senior Scientist with the Learning Research and Development Center, and Faculty Co-Director of the Graduate Program in Intelligent Systems, all at the University of Pittsburgh. Previously she was a member of the Artificial Intelligence Principles Research Department, AT&T Labs – Research (formerly Bell Laboratories). Dr. Litman’s current research focuses on enhancing the effectiveness of educational technology through the use of spoken and natural language processing techniques such as argument mining, summarization, and dialogue systems. Dr. Litman has been Chair of the North American Chapter of the Association for Computational Linguistics, has co-authored multiple papers winning best paper awards, and has been awarded Senior Member status by the Association for the Advancement of Artificial Intelligence.
Abstract
In defining “language understanding” for the purposes of natural language processing (NLP), we must inevitably be informed by human cognition: the only existing system that has achieved language understanding. Using human cognition to evaluate NLP systems is nothing new – nearly every NLP benchmark relies on some form of comparison to human judgments or productions. In this talk I will discuss a series of projects that take this rationale a step further, examining how NLP systems capture information by drawing on our knowledge of information sensitivity at a number of different levels of human cognition. Ideally we want our systems to extract and represent the same information that humans do at the endpoint of language comprehension – and because we have an idea of what that information is, we can test for it accordingly. However, we find at times that the representational patterns observed in our NLP systems instead show parallels with earlier stages of human language processing that reflect coarser information sensitivity. I discuss experiments examining both of these types of parallels: tests probing the extent to which NLP representations reflect the compositional meaning information to be expected in the final representation of a sentence, as well as tests examining the correspondence of NLP representations with earlier stages of human comprehension reflected in human predictive brain responses. I discuss the implications of these results for our assessment of these models, and for our targeting of ideal representational capacities as we improve our models moving forward.
Bio
Allyson Ettinger is an Assistant Professor in the Department of Linguistics at the University of Chicago. Her interdisciplinary work draws on methods and insights from cognitive science, linguistics, and computer science to examine the extraction, representation, and deployment of meaning information during language processing in humans and NLP systems. She received her PhD in Linguistics from the University of Maryland, and spent one year as research faculty at the Toyota Technological Institute at Chicago (TTIC) before beginning her appointment at the University of Chicago. She holds an additional courtesy appointment at TTIC.