NLP Research for Commercial Development of Writing Evaluation Capabilities – Jill Burstein (ETS)
Automated essay scoring was initially motivated by its potential cost savings for large-scale writing assessments. However, as the technology became more widely available and accepted, teachers and assessment experts realized that its potential could extend well beyond scoring alone. Over the past five years or so, automated essay evaluation has seen rapid development and commercial deployment for both large-scale assessment and classroom instruction. A number of factors contribute to an essay score, including varied sentence structure, grammatical correctness, appropriate word choice, errors in spelling and punctuation, use of transitional words and phrases, and organization and development. Instructional software now exists that provides essay scores and evaluations of student writing across all of these dimensions. The foundation of automated essay evaluation software is rooted in NLP research. This talk will walk through the development of CriterionSM, e-rater, and the Critique writing analysis tools, automated essay evaluation software developed at Educational Testing Service.
Jill Burstein is a Principal Development Scientist at Educational Testing Service. She received her Ph.D. in Linguistics from the Graduate Center of the City University of New York. Her research focuses on the development of automated writing evaluation technology. She is one of the inventors of e-rater, an automated essay scoring system developed at Educational Testing Service, and has collaborated on the research and development of capabilities that provide evaluative feedback on student writing for grammar, usage, mechanics, style, and discourse analysis for CriterionSM, a web-based writing instruction application. She is co-editor of the book "Automated Essay Scoring: A Cross-Disciplinary Perspective".