BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-23314@www.clsp.jhu.edu
DTSTAMP:20240328T213301Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nWhile GPT models have shown impressive performance on summarization and open-ended text generation\, it’s important to assess their abilities on more constrained text generation tasks that require significant and diverse rewritings. In this talk\, I will discuss the challenges of evaluating systems that are highly competitive and perform close to humans on two such tasks: (i) paraphrase generation and (ii) text simplification. To address these challenges\, we introduce an interactive Rank-and-Rate evaluation framework. Our results show that GPT-3.5 has made a major step up from fine-tuned T5 in paraphrase generation\, but still lacks the diversity and creativity of humans who spontaneously produce large quantities of paraphrases.\nAdditionally\, we demonstrate that GPT-3.5 performs similarly to a single human in text simplification\, which makes it difficult for existing automatic evaluation metrics to distinguish between the two. To overcome this shortcoming\, we propose LENS\, a learnable evaluation metric that outperforms SARI\, BERTScore\, and other existing methods in both automatic evaluation and minimal risk decoding for text generation.\nBiography\nWei Xu is an assistant professor in the School of Interactive Computing at the Georgia Institute of Technology\, where she is also affiliated with the new NSF AI CARING Institute and Machine Learning Center. She received her Ph.D. in Computer Science from New York University and her B.S. and M.S. from Tsinghua University. Xu’s research interests are in natural language processing\, machine learning\, and social media\, with a focus on text generation\, stylistics\, robustness and controllability of machine learning models\, and reading and writing assistive technology. She is a recipient of the NSF CAREER Award\, CrowdFlower AI for Everyone Award\, Criteo Faculty Research Award\, and Best Paper Award at COLING’18. She has also received funds from DARPA and IARPA. She is an elected member of the NAACL executive board and regularly serves as a senior area chair for AI/NLP conferences.
DTSTART;TZID=America/New_York:20230224T120000
DTEND;TZID=America/New_York:20230224T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Wei Xu (Georgia Tech) “GPT-3 vs Humans: Rethinking Evaluation of Natural Language Generation”
URL:https://www.clsp.jhu.edu/events/wei-xu-georgia-tech/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2023\,February\,Xu
END:VEVENT
END:VCALENDAR