BEGIN:VCALENDAR VERSION:2.0 PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9// CALSCALE:GREGORIAN METHOD:PUBLISH X-FROM-URL:https://www.clsp.jhu.edu X-WR-TIMEZONE:America/New_York BEGIN:VTIMEZONE TZID:America/New_York X-LIC-LOCATION:America/New_York BEGIN:STANDARD DTSTART:20231105T020000 TZOFFSETFROM:-0400 TZOFFSETTO:-0500 RDATE:20241103T020000 TZNAME:EST END:STANDARD BEGIN:DAYLIGHT DTSTART:20240310T020000 TZOFFSETFROM:-0500 TZOFFSETTO:-0400 RDATE:20250309T020000 TZNAME:EDT END:DAYLIGHT END:VTIMEZONE BEGIN:VEVENT UID:ai1ec-20117@www.clsp.jhu.edu DTSTAMP:20240328T122937Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:
Abstract
\nNeural sequence generation systems oftentimes generate sequences by searching for the most likely sequence under the learnt probability distribution. This assumes that the most likely sequence\, i.e. the mode\, under such a model must also be the best sequence it has to offer (often in a given context\, e.g. conditioned on a source sentence in translation). Recent findings in neural machine translation (NMT) show that the true most likely sequence oftentimes is empty under many state-of-the-art NMT models. This follows a large list of other pathologies and biases observed in NMT and other sequence generation models: a length bias\, larger beams degrading performance\, exposure bias\, and many more. Many of these works blame the probabilistic formulation of NMT or maximum likelihood estimation. We provide a different view on this: it is mode-seeking search\, e.g. beam search\, that introduces many of these pathologies and biases\, and such a decision rule is not suitable for the type of distributions learnt by NMT systems. We show that NMT models spread probability mass over many translations\, and that the most likely translation oftentimes is a rare event. We further show that translation distributions do capture important aspects of translation well in expectation. Therefore\, we advocate for decision rules that take into account the entire probability distribution and not just its mode. We provide one example of such a decision rule\, and show that this is a fruitful research direction.
\nBiography
\nI am an assistant professor (UD) in natural language processing at the Institute for Logic\, Language and Computation where I lead the Probabilistic Language Learning group.
\nMy work concerns the design of models and algorithms that learn to represent\, understand\, and generate language data. Examples of specific problems I am interested in include language modelling\, machine translation\, syntactic parsing\, textual entailment\, text classification\, and question answering.
\nI also develop techniques to approach general machine learning problems such as probabilistic inference\, gradient and density estimation.
\nMy interests sit at the intersection of disciplines such as statistics\, machine learning\, approximate inference\, global optimization\, formal languages\, and computational linguistics.
\n\n
DTSTART;TZID=America/New_York:20210419T120000 DTEND;TZID=America/New_York:20210419T131500 LOCATION:via Zoom SEQUENCE:0 SUMMARY:Wilker Aziz (University of Amsterdam) “The Inadequacy of the Mode in Neural Machine Translation” URL:https://www.clsp.jhu.edu/events/wilker-aziz-university-of-amsterdam/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2021\,April\,Aziz END:VEVENT BEGIN:VEVENT UID:ai1ec-20120@www.clsp.jhu.edu DTSTAMP:20240328T122937Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:
Abstract
\nRobotics@Google’s mission is to make robots useful in the real world through machine learning. We are excited about a new model for robotics\, designed for generalization across diverse environments and instructions. This model is focused on scalable data-driven learning\, which is task-agnostic\, leverages simulation\, learns from past experience\, and can be quickly adapted to work in the real world through limited interactions. In this talk\, we’ll share some of our recent work in this direction in both manipulation and locomotion applications.
\nBiography
\nCarolina
Abstract
\nWhile there is a vast amount of text written about nearly any topic\, it is often difficult for someone unfamiliar with a specific field to understand. Automated text simplification aims to reduce the complexity of a document\, making it more comprehensible to a broader audience. Much of the research in this field has traditionally focused on simplification sub-tasks\, such as lexical\, syntactic\, or sentence-level simplification. However\, current systems struggle to consistently produce high-quality simplifications. Phrase-based models tend to make too many poor transformations\; on the other hand\, recent neural models\, while producing grammatical output\, often do not make all needed changes to the original text. In this thesis\, I discuss novel approaches for improving lexical and sentence-level simplification systems. Regarding sentence simplification models\, after noting that encouraging diversity at inference time leads to significant improvements\, I take a closer look at the idea of diversity and perform an exhaustive comparison of diverse decoding techniques on other generation tasks. I also discuss the limitations in the framing of current simplification tasks\, which prevent these models from yet being practically useful. Thus\, I also propose a retrieval-based reformulation of the problem. Specifically\, starting with a document\, I identify concepts critical to understanding its content\, and then retrieve documents relevant for each concept\, re-ranking them based on the desired complexity level.
\nBiography
\nI’m a research scientist at the HLTCOE at Johns Hopkins University. My primary research interests are in language generation\, diverse and constrained decoding\, and information retrieval. During my PhD I focused mainly on the task of text simplification\, and now am working on formulating structured prediction problems as end-to-end generation tasks. I received my PhD in July 2021 from the University of Pennsylvania with Chris Callison-Burch and Marianna Apidianaki.
\nDTSTART;TZID=America/New_York:20211022T120000 DTEND;TZID=America/New_York:20211022T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Reno Kriz (HLTCOE – JHU) “Towards a Practically Useful Text Simplification System” URL:https://www.clsp.jhu.edu/events/reno-kriz-hltcoe-jhu-towards-a-practically-useful-text-simplification-system/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2021\,Kriz\,October END:VEVENT BEGIN:VEVENT UID:ai1ec-21023@www.clsp.jhu.edu DTSTAMP:20240328T122937Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:
Abstract
\nSpeech data is notoriously difficult to work with due to a variety of codecs\, lengths of recordings\, and meta-data formats. We present Lhotse\, a speech data representation library that draws upon lessons learned from the Kaldi speech recognition toolkit and brings its concepts into the modern deep learning ecosystem. Lhotse provides a common JSON description format with corresponding Python classes and data preparation recipes for over 30 popular speech corpora. Various datasets can be easily combined together and re-purposed for different tasks. The library handles multi-channel recordings\, long recordings\, local and cloud storage\, and lazy and on-the-fly operations\, amongst other features. We introduce the Cut and CutSet concepts\, which simplify common data wrangling tasks for audio and help incorporate the acoustic context of speech utterances. Finally\, we show how Lhotse leverages PyTorch data API abstractions and adapts them to handle speech data for deep learning.
\nBiography
\nPiotr Zelasko is an assistant research scientist in the Center for Language and Speech Processing (CLSP) who specializes in automatic speech recognition (ASR) and spoken language understanding (SLU). His current research focuses on applying multilingual and crosslingual speech recognition systems to categorize the phonetic inventory of a previously unknown language and on improving defenses against adversarial attacks on both speaker identification and automatic speech recognition systems. He is also addressing the question of how to structure a spontaneous conversation into high-level semantic units such as dialog acts or topics. Finally\, he is working on Lhotse + K2\, the next-generation speech processing research software ecosystem. Before joining Johns Hopkins\, Zelasko worked as a machine learning consultant for Avaya (2017-2019) and as a machine learning engineer for Techmo (2015-2017). Zelasko received his PhD (2019) in electronics engineering\, as well as his master’s (2014) and undergraduate (2013) degrees in acoustic engineering from AGH University of Science and Technology in Kraków\, Poland.
DTSTART;TZID=America/New_York:20211029T120000 DTEND;TZID=America/New_York:20211029T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore MD 21218 SEQUENCE:0 SUMMARY:Piotr Zelasko (CLSP at JHU) “Lhotse: a speech data representation library for the modern deep learning ecosystem” URL:https://www.clsp.jhu.edu/events/piotr-zelasko-clsp-at-jhu-lhotse-a-speech-data-representation-library-for-the-modern-deep-learning-ecosystem/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2021\,October\,Zelasko END:VEVENT BEGIN:VEVENT UID:ai1ec-22380@www.clsp.jhu.edu DTSTAMP:20240328T122937Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract
\nThe availability of large multilingual pre-trained language models has opened up exciting pathways for developing NLP technologies for languages with scarce resources. In this talk I will advocate for the need to go beyond the most common languages in multilingual evaluation\, and discuss the challenges of handling new\, unseen-during-training languages and varieties. I will also share some of my experiences with working with indigenous and other endangered language communities and activists.
\nBiography
\nAntonios Anastasopoulos is an Assistant Professor in Computer Science at George Mason University. In 2019\, Antonis received his PhD in Computer Science from the University of Notre Dame and then worked as a postdoctoral researcher at the Language Technologies Institute at Carnegie Mellon University. His research interests revolve around computational linguistics and natural language processing with a focus on low-resource settings\, endangered languages\, and cross-lingual learning.
\nDTSTART;TZID=America/New_York:20220930T120000 DTEND;TZID=America/New_York:20220930T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Antonios Anastasopoulos (George Mason University) “NLP Beyond the Top-100 Languages” URL:https://www.clsp.jhu.edu/events/antonis-anastasopoulos-george-mason-university/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2022\,Anastasopoulos\,September END:VEVENT BEGIN:VEVENT UID:ai1ec-22423@www.clsp.jhu.edu DTSTAMP:20240328T122937Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION: DTSTART;TZID=America/New_York:20221007T120000 DTEND;TZID=America/New_York:20221007T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Ariya Rastrow (Amazon) URL:https://www.clsp.jhu.edu/events/ariya-rastrow-amazon-2/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2022\,October\,Rastrow END:VEVENT BEGIN:VEVENT UID:ai1ec-22394@www.clsp.jhu.edu DTSTAMP:20240328T122937Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:
Abstract
\nModel robustness and spurious correlations have received increasing attention in the NLP community\, both in methods and evaluation. The term “spurious correlation” is overloaded\, though\, and can refer to any undesirable shortcuts learned by the model\, as judged by domain experts.
\nWhen designing mitigation algorithms\, we often (implicitly) assume that a spurious feature is irrelevant for prediction. However\, many features in NLP (e.g. word overlap and negation) are not spurious in the way that the background is spurious for classifying objects in an image. In contrast\, they carry important information that humans need to make predictions. In this talk\, we argue that it is more productive to characterize features in terms of their necessity and sufficiency for prediction. We then discuss the implications of this categorization in representation\, learning\, and evaluation.
\nBiography
\nHe He is an Assistant Professor in the Department of Computer Science and the Center for Data Science at New York University. She obtained her PhD in Computer Science at the University of Maryland\, College Park. Before joining NYU\, she spent a year at AWS AI and was a post-doc at Stanford University before that. She is interested in building robust and trustworthy NLP systems in human-centered settings. Her recent research focus includes robust language understanding\, collaborative text generation\, and understanding capabilities and issues of large language models.
\n DTSTART;TZID=America/New_York:20221014T120000 DTEND;TZID=America/New_York:20221014T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:He He (New York University) “What We Talk about When We Talk about Spurious Correlations in NLP” URL:https://www.clsp.jhu.edu/events/he-he-new-york-university/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2022\,He\,October END:VEVENT BEGIN:VEVENT UID:ai1ec-22395@www.clsp.jhu.edu DTSTAMP:20240328T122937Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract
\nModern learning architectures for natural language processing have been very successful in incorporating a huge amount of texts into their parameters. However\, by and large\, such models store and use knowledge in distributed and decentralized ways. This proves unreliable and makes the models ill-suited for knowledge-intensive tasks that require reasoning over factual information in linguistic expressions. In this talk\, I will give a few examples of exploring alternative architectures to tackle those challenges. In particular\, we can improve the performance of such (language) models by representing\, storing and accessing knowledge in a dedicated memory component.
\nThis talk is based on several joint works with Yury Zemlyanskiy (Google Research)\, Michiel de Jong (USC and Google Research)\, William Cohen (Google Research and CMU) and our other collaborators in Google Research.
\nBiography
\nFei is a research scientist at Google Research. Before that\, he was a Professor of Computer Science at the University of Southern California. His primary research interests are machine learning and its application to various AI problems: speech and language processing\, computer vision\, robotics\, and recently weather forecasting and climate modeling. He has a PhD (2007) in Computer and Information Science from the University of Pennsylvania and a B.Sc. and M.Sc. in Biomedical Engineering from Southeast University (Nanjing\, China).
DTSTART;TZID=America/New_York:20221024T120000 DTEND;TZID=America/New_York:20221024T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Fei Sha (University of Southern California) “Extracting Information from Text into Memory for Knowledge-Intensive Tasks” URL:https://www.clsp.jhu.edu/events/fei-sha-university-of-southern-california/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2022\,October\,Sha END:VEVENT BEGIN:VEVENT UID:ai1ec-23515@www.clsp.jhu.edu DTSTAMP:20240328T122937Z CATEGORIES;LANGUAGE=en-US:Student Seminars CONTACT: DESCRIPTION:Abstract
\nHow important are different temporal speech modulations for speech recognition? We answer this question from two complementary perspectives. Firstly\, we quantify the amount of phonetic information in the modulation spectrum of speech by computing the mutual information between temporal modulations and frame-wise phoneme labels. Looking from another perspective\, we ask which speech modulations an Automatic Speech Recognition (ASR) system prefers for its operation. Data-driven weights are learned over the modulation spectrum and optimized for an end-to-end ASR task. Both methods unanimously agree that speech information is mostly contained in slow modulations. Maximum mutual information occurs around 3-6 Hz\, which also happens to be the range of modulations most preferred by the ASR. In addition\, we show that the incorporation of this knowledge into ASRs significantly reduces their dependency on the amount of training data.
\n\n
Learning How to Play With The Machines: Taking Stock of Where the Collaboration Between Computational and Social Science Stands
\nSpeakers: Jeff Gill\, Ernesto Calvo\, Hale Sirin and Antonios Anastasopoulos
DTSTART;TZID=America/New_York:20230407T120000 DTEND;TZID=America/New_York:20230407T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street SEQUENCE:0 SUMMARY:JHU CLSP APSA Roundtable on Learning How to Play with the Machines URL:https://www.clsp.jhu.edu/events/jhu-clsp-apsa-roundtable-on-learning-how-to-play-with-the-machines/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2023\,April\,APSA Roundtable END:VEVENT BEGIN:VEVENT UID:ai1ec-23586@www.clsp.jhu.edu DTSTAMP:20240328T122937Z CATEGORIES;LANGUAGE=en-US:Student Seminars CONTACT: DESCRIPTION: DTSTART;TZID=America/New_York:20230410T120000 DTEND;TZID=America/New_York:20230410T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Student Seminar – Ruizhe Huang URL:https://www.clsp.jhu.edu/events/student-seminar-ruizhe-huang/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2023\,April\,Huang END:VEVENT BEGIN:VEVENT UID:ai1ec-23588@www.clsp.jhu.edu DTSTAMP:20240328T122937Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract
\nAdvances in open-domain Large Language Models (LLMs)\, starting with BERT and more recently with GPT-4\, PaLM\, and LLaMA\, have facilitated dramatic improvements in conversational systems. These improvements include an unprecedented breadth of conversational interactions between humans and machines while maintaining\, and sometimes surpassing\, the accuracy of systems trained specifically for known\, closed domains. However\, many applications still require higher levels of accuracy than pre-trained LLMs can provide. There are many studies underway to accomplish this. Broadly speaking\, the methods assume the pre-trained models are fixed (due to cost/time) and instead look to various augmentation methods\, including prompting strategies and model adaptation/fine-tuning.
\nOne augmentation strategy leverages the context of the conversation. For example\, who are the participants and what is known about these individuals (personal context)\, what was just said (dialogue context)\, where is the conversation taking place (geo context)\, what time of day and season is it (time context)\, etc. A powerful form of context is the shared visual setting of the conversation between the human(s) and machine. The shared visual scene may be from a device (phone\, smart glasses) or represented on a screen (browser\, maps\, etc.). The elements in the visual context can be exploited by grounding the natural language conversational interaction\, thereby changing the priors of certain concepts and increasing the accuracy of the system. In this talk\, I will present some of my historical work in this area as well as my recent work in the AI Virtual Assistant (AVA) Lab at Georgia Tech.
\nBio
\nDr. Larry Heck is a Professor with a joint appointment in the School of Electrical and Computer Engineering and the School of Interactive Computing at the Georgia Institute of Technology. He holds the Rhesa S. Farmer Distinguished Chair of Advanced Computing Concepts and is a Georgia Research Alliance Eminent Scholar. He received the BSEE from Texas Tech University (1986)\, and the MSEE and PhD EE from the Georgia Institute of Technology (1989\, 1991). He is a Fellow of the IEEE\, was inducted into the Academy of Distinguished Engineering Alumni at Georgia Tech\, and received the Distinguished Engineer Award from the Texas Tech University Whitacre College of Engineering. He was a Senior Research Engineer with SRI (1992-98)\, Vice President of R&D at Nuance (1998-2005)\, Vice President of Search and Advertising Sciences at Yahoo! (2005-2009)\, Chief Scientist of the Microsoft Speech products and Distinguished Engineer in Microsoft Research (2009-2014)\, Principal Scientist with Google Research (2014-2017)\, and CEO of Viv Labs and SVP at Samsung (2017-2021).
\n\n
Abstract
\nOur models achieve state-of-the-art performance and lay important groundwork towards realizing a universal translation system. At the same time\, we keep making open-source contributions for everyone to keep advancing the research for the languages they care about.
\nPaco is a Research Scientist Manager supporting translation teams at Meta AI (FAIR). He works in the field of machine translation with a focus on low-resource translation (e.g. NLLB\, FLORES) and the aim to break language barriers. He joined Meta in 2016. His research has been published in top-tier NLP venues like ACL and EMNLP. He was the co-chair of Research at AMTA (2020-2022). He has organized several research competitions focused on low-resource translation and data filtering. Paco obtained his PhD from the ITESM in Mexico\, was a visiting scholar at the LTI-CMU from 2008-2009\, and participated in DARPA’s GALE evaluation program. Paco was a post-doc and scientist at the Qatar Computing Research Institute in Qatar in 2012-2016.
DTSTART;TZID=America/New_York:20230417T120000 DTEND;TZID=America/New_York:20230417T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Paco Guzman (Meta AI) “Building a Universal Translation System to Break Down Language Barriers” URL:https://www.clsp.jhu.edu/events/paco-guzman-meta-ai/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2023\,April\,Guzman END:VEVENT BEGIN:VEVENT UID:ai1ec-23592@www.clsp.jhu.edu DTSTAMP:20240328T122937Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract
\nLarge language models (LLMs) have ushered in exciting capabilities in language understanding and text generation\, with systems like ChatGPT holding fluent dialogs with users and being almost indistinguishable from humans. While this has obviously raised conversational systems and chatbots to a new level\, it also presents exciting new opportunities for building artificial agents with improved decision-making capabilities. Specifically\, the ability to reason with language can allow us to build agents that can 1) execute complex action sequences to effect change in the world\, 2) learn new skills by ‘reading’ in addition to ‘doing’\, and 3) allow for easier personalization and control over their behavior. In this talk\, I will demonstrate how we can build such language-enabled agents that exhibit the above traits across various use cases such as multi-hop question answering\, web interaction\, and robotic tool manipulation. In the end\, I will also discuss some dangers of using these LLM-based systems and some challenges that lie ahead in ensuring their safe use.
\nBiography
\nKarthik Narasimhan is an assistant professor in the Computer Science department at Princeton University and a co-Director of the Princeton NLP group. His research spans the areas of natural language processing and reinforcement learning\, with the goal of building intelligent agents that learn to operate in the world through both their own experience (“doing things”) and leveraging existing human knowledge (“reading about things”). Karthik received his PhD from MIT in 2017\, and spent a year as a visiting research scientist at OpenAI contributing to the GPT language model\, prior to joining Princeton in 2018. His research has been recognized by the NSF CAREER\, a Google Research Scholar Award\, an Amazon research award (2019)\, a Bell Labs runner-up prize\, and outstanding paper awards at EMNLP (2015\, 2016) and NeurIPS (2022).
DTSTART;TZID=America/New_York:20230421T120000 DTEND;TZID=America/New_York:20230421T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Karthik Narasimhan (Princeton University) “Towards General-Purpose Language-Enabled Agents: Machines that can Read\, Think and Act” URL:https://www.clsp.jhu.edu/events/karthik-narasimhan-princeton-university/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2023\,April\,Narasimhan END:VEVENT BEGIN:VEVENT UID:ai1ec-23606@www.clsp.jhu.edu DTSTAMP:20240328T122937Z CATEGORIES;LANGUAGE=en-US:Student Seminars CONTACT: DESCRIPTION: DTSTART;TZID=America/New_York:20230424T120000 DTEND;TZID=America/New_York:20230424T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Student Seminar – Brian Lu URL:https://www.clsp.jhu.edu/events/student-seminar-brian-lu/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2023\,April\,Lu END:VEVENT BEGIN:VEVENT UID:ai1ec-23608@www.clsp.jhu.edu DTSTAMP:20240328T122937Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract
\nAutomated analysis of student writing has the potential to provide alternatives to selected-response questions such as multiple choice\, and to enable teachers and instructors to assess students’ reasoning skills based on their long-form writing. Further\, automated support to assess both short answers and long passages could provide students with a smoother trajectory towards mastery of written communication. Our methods focus on the specific ideas students express to support formative assessment through different kinds of feedback\, which aims to scaffold their abilities to reason and communicate. In this talk I review our work in the PSU NLP lab on methods for automated assessment of different forms of student writing\, from younger and older students. I will briefly illustrate highly curated datasets created in collaboration with researchers in STEM education\, results from deployment of an older content analysis tool on middle school physics essays\, and very preliminary results on assessment of college students’ physics lab reports. I will also present our current work on short answer assessment using a novel recurrent relation network that incorporates contrastive learning.
\nBio
\nBecky Passonneau has been a Professor in the Department of Computer Science and Engineering at Penn State University since 2016\, when she joined as the first NLP researcher. Since that time the NLP faculty has grown to include Rui Zhang and Wenpeng Yin. Becky’s research in natural language processing addresses computational pragmatics\, meaning the investigation of language as a system of interactive behavior that serves a wide range of purposes. She received her PhD in Linguistics from the University of Chicago in 1985\, and worked at several academic and industry research labs before joining Penn State. Her work is reported in over 140 publications in journals and refereed conference proceedings\, and has been funded through 27 sponsored projects from 16 sources\, including government agencies\, corporate sponsors\, corporate gifts\, and foundations.
DTSTART;TZID=America/New_York:20230428T120000 DTEND;TZID=America/New_York:20230428T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Becky Passonneau (Penn State University) “Automated Support to Scaffold Students’ Short- and Long-form STEM Writing” URL:https://www.clsp.jhu.edu/events/becky-passonneau-penn-state-university-automated-support-to-scaffold-students-short-and-long-form-stem-writing/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2023\,April\,Passonneau END:VEVENT BEGIN:VEVENT UID:ai1ec-23900@www.clsp.jhu.edu DTSTAMP:20240328T122937Z CATEGORIES;LANGUAGE=en-US:Student Seminars CONTACT: DESCRIPTION: DTSTART;TZID=America/New_York:20231002T120000 DTEND;TZID=America/New_York:20231002T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:CLSP Student Seminar – Anna Favaro URL:https://www.clsp.jhu.edu/events/clsp-student-seminar-anna-favaro/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2023\,Favaro\,October END:VEVENT BEGIN:VEVENT UID:ai1ec-24115@www.clsp.jhu.edu DTSTAMP:20240328T122937Z CATEGORIES;LANGUAGE=en-US:Student Seminars CONTACT: DESCRIPTION:Abstract
\nOur goal is to use AI to automatically find tax minimization strategies\, an approach we call “Shelter Check.” It would come in two variants. Existing-Authority Shelter Check would aim to find whether existing tax law authorities can be combined to create tax minimization strategies\, so the IRS or Congress can shut them down. New-Authority Shelter Check would automate checking whether a new tax law authority – like proposed legislation or a draft court decision – would combine with existing authorities to create a new tax minimization strategy. We initially had high hopes for GPT-* large language models for implementing Shelter Check\, but our tests have shown that they do very poorly at basic legal reasoning and handling legal text. So we are now creating a benchmark and training data for LLMs handling legal text\, hoping to spur improvements.
DTSTART;TZID=America/New_York:20231006T120000 DTEND;TZID=America/New_York:20231006T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:CLSP Student Seminar – Andrew Blair-Stanek “Shelter Check and GPT-4’s Bad Legal Performance” URL:https://www.clsp.jhu.edu/events/clsp-student-seminar-andrew-blair-stanek-shelter-check-and-gpt-4s-bad-legal-performance/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2023\,Blair-Stanek\,October END:VEVENT BEGIN:VEVENT UID:ai1ec-24005@www.clsp.jhu.edu DTSTAMP:20240328T122937Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract
\nLarge-scale generative models such as GPT and DALL-E have revolutionized natural language processing and computer vision research. These models not only generate high-fidelity text or image outputs\, but also demonstrate impressive domain and task generalization capabilities. In contrast\, audio generative models are relatively primitive in scale and generalization.
\nIn this talk\, I will start with a brief introduction to conventional neural speech generative models and discuss why they are unfit for scaling to Internet-scale data. Next\, by reviewing the latest large-scale generative models for text and image\, I will outline a few lines of promising approaches to build scalable speech models. Last\, I will present Voicebox\, our latest work to advance this area. Voicebox is the most versatile generative model for speech. It is trained with a simple task — text-conditioned speech infilling — on over 50K hours of multilingual speech with a powerful flow-matching objective. Through in-context learning\, Voicebox can perform monolingual/cross-lingual zero-shot TTS\, holistic style conversion\, transient noise removal\, content editing\, and diverse sample generation. Moreover\, Voicebox achieves state-of-the-art performance and excellent run-time efficiency.
\nBiography
\nWei-Ning Hsu is a research scientist at Meta Foundational AI Research (FAIR) and currently the lead of the audio generation team. His research focuses on self-supervised learning and generative models for speech and audio. His pioneering work includes HuBERT\, AV-HuBERT\, TextlessNLP\, data2vec\, wav2vec-U\, textless speech translation\, and Voicebox.
\nPrior to joining Meta\, Wei-Ning worked at MERL and Google Brain as a research intern. He received his Ph.D. and S.M. degrees in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology in 2020 and 2018\, under the supervision of Dr. James Glass. He received his B.S. degree in Electrical Engineering from National Taiwan University in 2014\, under the supervision of Prof. Lin-shan Lee and Prof. Hsuan-Tien Lin.
DTSTART;TZID=America/New_York:20231009T120000 DTEND;TZID=America/New_York:20231009T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Wei-Ning Hsu (Meta Foundational AI Research) “Large Scale Universal Speech Generative Models” URL:https://www.clsp.jhu.edu/events/wei-ning-hsu-meta-foundational-ai-research-large-scale-universal-speech-generative-models/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2023\,Hsu\,October END:VEVENT BEGIN:VEVENT UID:ai1ec-23902@www.clsp.jhu.edu DTSTAMP:20240328T122937Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract
\nRecent advances in speech technology make heavy use of pre-trained models that learn from large quantities of raw (untranscribed) speech\, using “self-supervised” (i.e. unsupervised) learning. These models learn to transform the acoustic input into a different representational format that makes supervised learning (for tasks such as transcription or even translation) much easier. However\, *what* and *how* speech-relevant information is encoded in these representations is not well understood. I will talk about some work at various stages of completion in which my group is analyzing the structure of these representations\, to gain a more systematic understanding of how word-level\, phonetic\, and speaker information is encoded.
\nBiography
\nSharon Goldwater is a Professor in the Institute for Language\, Cognition and Computation at the University of Edinburgh’s School of Informatics. She received her PhD in 2007 from Brown University and spent two years as a postdoctoral researcher at Stanford University before moving to Edinburgh. Her research interests include unsupervised and minimally-supervised learning for speech and language processing\, computer modelling of language acquisition in children\, and computational studies of language use. Her main focus within linguistics has been on the lower levels of structure\, including phonetics\, phonology\, and morphology. Prof. Goldwater has received awards including the 2016 Roger Needham Award from the British Computer Society for “distinguished research contribution in computer science by a UK-based researcher who has completed up to 10 years of post-doctoral research.” She has served on the editorial boards of several journals\, including Computational Linguistics\, Transactions of the Association for Computational Linguistics\, and the inaugural board of OPEN MIND: Advances in Cognitive Science. She was a program chair for the EACL 2014 Conference and chaired the EACL governing board from 2019-2020.
DTSTART;TZID=America/New_York:20231027T120000 DTEND;TZID=America/New_York:20231027T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Sharon Goldwater (University of Edinburgh) “Analyzing Representations of Self-Supervised Speech Models” URL:https://www.clsp.jhu.edu/events/sharon-goldwater-university-of-edinburgh/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2023\,Goldwater\,October END:VEVENT BEGIN:VEVENT UID:ai1ec-24491@www.clsp.jhu.edu DTSTAMP:20240328T122937Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION: DTSTART;TZID=America/New_York:20240401T120000 DTEND;TZID=America/New_York:20240401T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Yuan Gong URL:https://www.clsp.jhu.edu/events/yuan-gong/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2024\,April\,Gong END:VEVENT END:VCALENDAR