BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-20115@www.clsp.jhu.edu
DTSTAMP:20240328T151845Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nData science in small medical datasets usually means doing precision guesswork on unreliable data provided by those with high expectations. The first part of this talk will focus on issues that data scientists and engineers have to address when working with this kind of data (e.g. unreliable labels\, the effect of confounding factors\, the necessity of clinical interpretability\, difficulties with fusing multiple data sets). The second part of the talk will include some real examples of this kind of data science in the fields of neurology (prediction of motor deficits in Parkinson’s disease based on acoustic analysis of speech\, diagnosis of Parkinson’s disease dysgraphia utilising online handwriting\, exploring the Mozart effect in epilepsy based on music information retrieval) and psychology (assessment of graphomotor disabilities in children with developmental dysgraphia).\nBiography\nJiri Mekyska is the head of the BDALab (Brain Diseases Analysis Laboratory) at the Brno University of Technology\, where he leads a multidisciplinary team of researchers (signal processing engineers\, data scientists\, neurologists\, psychologists) with a special focus on the development of new digital endpoints and digital biomarkers that enable better understanding\, diagnosis\, and monitoring of neurodegenerative (e.g. Parkinson’s disease) and neurodevelopmental (e.g. dysgraphia) diseases.
DTSTART;TZID=America/New_York:20210329T120000
DTEND;TZID=America/New_York:20210329T131500
LOCATION:via Zoom
SEQUENCE:0
SUMMARY:Jiri Mekyska (Brno University of Technology) “Data Science in Small Medical Data Sets: From Logistic Regression Towards Logistic Regression”
URL:https://www.clsp.jhu.edu/events/jiri-mekyska-brno-university-of-technology/
X-COST-TYPE:free
END:VEVENT
Abstract\nIn this talk\, I present a multipronged strategy for zero-shot cross-lingual Information Extraction\, that is\, the construction of an IE model for some target language\, given existing annotations exclusively in some other language. This work is part of the JHU team’s effort under the IARPA BETTER program. I explore data augmentation techniques including data projection and self-training\, and how different pretrained encoders impact them. We find through extensive experiments and extension of techniques that a combination of approaches\, both new and old\, leads to better performance than any one cross-lingual strategy in particular.\nBiography
Abstract\nSocial media allows researchers to track societal and cultural changes over time based on language analysis tools. Many of these tools rely on statistical algorithms which need to be tuned to specific types of language. Recent studies have questioned the robustness of longitudinal analyses based on statistical methods due to issues of temporal bias and semantic shift. To what extent are changes in semantics over time affecting the reliability of longitudinal analyses? We examine this question through a case study: understanding shifts in mental health during the course of the COVID-19 pandemic. We demonstrate that a recently-introduced method for measuring semantic shift may be used to proactively identify failure points of language-based models and improve predictive generalization over time. Ultimately\, we find that these analyses are critical to producing accurate longitudinal studies of social media.
X-TAGS;LANGUAGE=en-US:2022\,February\,Harrigian
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-21277@www.clsp.jhu.edu
DTSTAMP:20240328T151845Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nAs humans\, our understanding of language is grounded in a rich mental model about “how the world works” – that we learn through perception and interaction. We use this understanding to reason beyond what we literally observe or read\, imagining how situations might unfold in the world. Machines today struggle at this kind of reasoning\, which limits how they can communicate with humans.\nIn my talk\, I will discuss three lines of work to bridge this gap between machines and humans. I will first discuss how we might measure grounded understanding. I will introduce a suite of approaches for constructing benchmarks\, using machines in the loop to filter out spurious biases. Next\, I will introduce PIGLeT: a model that learns physical commonsense understanding by interacting with the world through simulation\, using this knowledge to ground language. From an English-language description of an event\, PIGLeT can anticipate how the world state might change – outperforming text-only models that are orders of magnitude larger. Finally\, I will introduce MERLOT\, which learns about situations in the world by watching millions of YouTube videos with transcribed speech. Through training objectives inspired by the developmental psychology idea of multimodal reentry\, MERLOT learns to fuse language\, vision\, and sound together into powerful representations.\nTogether\, these directions suggest a path forward for building machines that learn language rooted in the world.\nBiography\nRowan Zellers is a final-year PhD candidate at the University of Washington in Computer Science & Engineering\, advised by Yejin Choi and Ali Farhadi. His research focuses on enabling machines to understand language\, vision\, sound\, and the world beyond these modalities. He has been recognized through an NSF Graduate Fellowship and a NeurIPS 2021 outstanding paper award. His work has appeared in several media outlets\, including Wired\, the Washington Post\, and the New York Times. In the past\, he graduated from Harvey Mudd College with a B.S. in Computer Science & Mathematics\, and has interned at the Allen Institute for AI.
DTSTART;TZID=America/New_York:20220214T120000
DTEND;TZID=America/New_York:20220214T131500
LOCATION:Ames Hall 234 - Presented Virtually Via Zoom https://wse.zoom.us/j/96735183473 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Rowan Zellers (University of Washington) “Grounding Language by Seeing\, Hearing\, and Interacting”
URL:https://www.clsp.jhu.edu/events/rowan-zellers-university-of-washington-grounding-language-by-seeing-hearing-and-interacting/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,February\,Zellers
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-21280@www.clsp.jhu.edu
DTSTAMP:20240328T151845Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nAs AI-driven language interfaces (such as chat-bots) become more integrated into our lives\, they need to become more versatile and reliable in their communication with human users. How can we make progress toward building more “general” models that are capable of understanding a broader spectrum of language commands\, given practical constraints such as the limited availability of labeled data?\nIn this talk\, I will describe my research toward addressing this question along two dimensions of generality. First\, I will discuss progress in “breadth” — models that address a wider variety of tasks and abilities\, drawing inspiration from existing statistical learning techniques such as multi-task learning. In particular\, I will showcase a system that works well on several QA benchmarks\, resulting in state-of-the-art results on 10 benchmarks. Furthermore\, I will show its extension to tasks beyond QA (such as text generation or classification) that can be “defined” via natural language. In the second part\, I will focus on progress in “depth” — models that can handle complex inputs such as compositional questions. I will introduce Text Modular Networks\, a general framework that casts problem-solving as natural language communication among simpler “modules.” Applying this framework to compositional questions by leveraging discrete optimization and existing non-compositional closed-box QA models results in a model with strong empirical performance on multiple complex QA benchmarks while providing human-readable reasoning.\nI will conclude with future research directions toward broader NLP systems by addressing the limitations of the presented ideas and other missing elements needed to move toward more general-purpose interactive language understanding systems.\nBiography\nDaniel Khashabi is a postdoctoral researcher at the Allen Institute for Artificial Intelligence (AI2)\, Seattle. Previously\, he completed his Ph.D. in Computer and Information Sciences at the University of Pennsylvania in 2019. His interests lie at the intersection of artificial intelligence and natural language processing\, with a vision toward more general systems through unified algorithms and theories.
DTSTART;TZID=America/New_York:20220218T120000
DTEND;TZID=America/New_York:20220218T131500
LOCATION:Ames Hall 234 - Presented Virtually Via Zoom https://wse.zoom.us/j/96735183473 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Daniel Khashabi (Allen Institute for Artificial Intelligence) “The Quest Toward Generality in Natural Language Understanding”
URL:https://www.clsp.jhu.edu/events/daniel-khashabi-allen-institute-for-artificial-intelligence/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,February\,Khashabi
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-21487@www.clsp.jhu.edu
DTSTAMP:20240328T151845Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nEnormous amounts of ever-changing knowledge are available online in diverse textual styles and diverse formats. Recent advances in deep learning algorithms and large-scale datasets are spurring progress in many Natural Language Processing (NLP) tasks\, including question answering. Nevertheless\, these models cannot scale up when task-annotated training data are scarce. This talk presents my lab’s work toward building general-purpose models in NLP and how to systematically evaluate them. First\, I present a general model for two known question answering tasks\, in English and in multiple languages\, that is robust to small domain shifts. Then\, I show a meta-training approach that can solve a variety of NLP tasks using only a few examples\, and introduce a benchmark to evaluate cross-task generalization. Finally\, I discuss neuro-symbolic approaches to address more complex tasks by eliciting knowledge from structured data and language models.\nBiography\nHanna Hajishirzi is an Assistant Professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington and a Senior Research Manager at the Allen Institute for AI. Her research spans different areas in NLP and AI\, focusing on developing general-purpose machine learning algorithms that can solve many NLP tasks. Applications for these algorithms include question answering\, representation learning\, green AI\, knowledge extraction\, and conversational dialogue. Honors include the NSF CAREER Award\, Sloan Fellowship\, Allen Distinguished Investigator Award\, Intel rising star award\, best paper and honorable mention awards\, and several industry research faculty awards. Hanna received her PhD from the University of Illinois and spent a year as a postdoc at Disney Research and CMU.
DTSTART;TZID=America/New_York:20220225T120000
DTEND;TZID=America/New_York:20220225T131500
LOCATION:Ames Hall 234 - Presented Virtually Via Zoom https://wse.zoom.us/j/96735183473
SEQUENCE:0
SUMMARY:Hanna Hajishirzi (University of Washington & Allen Institute for AI) “Toward Robust\, Knowledge-Rich NLP”
URL:https://www.clsp.jhu.edu/events/hanna-hajishirzi-university-of-washington-allen-institute-for-ai-toward-robust-knowledge-rich-nlp/
X-COST-TYPE:free
END:VEVENT
Abstract\nSince it is increasingly harder to opt out from interacting with AI technology\, people demand that AI is capable of maintaining contracts such that it supports agency and oversight of the people who are required to use it or who are affected by it. To help those people create a mental model about how to interact with AI systems\, I extend the underlying models to self-explain—predict the label/answer and explain this prediction. In this talk\, I will present how to generate (1) free-text explanations given in plain English that immediately tell users the gist of the reasoning\, and (2) contrastive explanations that help users understand how they could change the text to get another label.\nBiography\nAna Marasović is a postdoctoral researcher at the Allen Institute for AI (AI2) and the Paul G. Allen School of Computer Science & Engineering at the University of Washington. Her research interests broadly lie in the fields of natural language processing\, explainable AI\, and vision-and-language learning. Her projects are motivated by a unified goal: improve interaction and control of NLP systems to help people make these systems do what they want with the confidence that they’re getting exactly what they need. Prior to joining AI2\, Ana obtained her PhD from Heidelberg University.\nHow to pronounce my name: the first name is Ana like in Spanish\, i.e.\, with a long “a” like in “water”\; regarding the last name: “mara” as in actress Mara Wilson + “so” + “veetch”.
X-TAGS;LANGUAGE=en-US:2022\,February\,Marasovic
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-21494@www.clsp.jhu.edu
DTSTAMP:20240328T151845Z
CATEGORIES;LANGUAGE=en-US:Student Seminars
CONTACT:
DESCRIPTION:Abstract\nAdversarial attacks deceive neural network systems by adding carefully crafted perturbations to benign signals. Being almost imperceptible to humans\, these attacks pose a severe security threat to state-of-the-art speech and speaker recognition systems\, making it vital to propose countermeasures against them. In this talk\, we focus on 1) classification of a given adversarial attack into attack algorithm type\, threat model type\, and signal-to-adversarial-noise ratios\, and 2) developing a novel speech denoising solution to further improve the classification performance.\nOur proposed approach uses an x-vector network as a signature extractor to get embeddings\, which we call signatures. These signatures contain information about the attack and can help classify different attack algorithms\, threat models\, and signal-to-adversarial-noise ratios. We demonstrate the transferability of such signatures to other tasks. In particular\, a signature extractor trained to classify attacks against speaker identification can also be used to classify attacks against speaker verification and speech recognition. We also show that signatures can be used to detect unknown attacks\, i.e.\, attacks not included during training. Lastly\, we propose to make the signature extractor’s job easier by removing the clean signal from the adversarial example (which consists of the clean signal plus the perturbation). We train our signature extractor using the adversarial perturbation. At inference time\, we use a time-domain denoiser to obtain the adversarial perturbation from adversarial examples. Using our improved approach\, we show that common attacks in the literature (Fast Gradient Sign Method (FGSM)\, Projected Gradient Descent (PGD)\, Carlini-Wagner (CW)) can be classified with accuracy as high as 96%. We also detect unknown attacks with an equal error rate (EER) of about 9%\, which is very promising.
DTSTART;TZID=America/New_York:20220304T120000
DTEND;TZID=America/New_York:20220304T131500
LOCATION:Ames Hall 234 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Student Seminar – Sonal Joshi “Classify and Detect Adversarial Attacks Against Speaker and Speech Recognition Systems”
URL:https://www.clsp.jhu.edu/events/student-seminar-sonal-joshi/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,Joshi\,March
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-21615@www.clsp.jhu.edu
DTSTAMP:20240328T151845Z
CATEGORIES;LANGUAGE=en-US:Student Seminars
CONTACT:
DESCRIPTION:Abstract\nWe consider the problem of data collection for semantically rich NLU tasks\, where detailed semantics of documents (or utterances) are captured using a complex meaning representation. Previously\, data collection for such tasks was handled either at the cost of extensive annotator training (e.g. in FrameNet or PropBank) or by simplifying the meaning representation (e.g. in QA-SRL or Overnight). In this talk\, we present two systems [1\, 2] that aim to support fast\, accurate\, and expressive semantic annotation by pairing human workers with a trained model in the loop.\nThe first system\, called Guided K-best [1]\, is an annotation toolkit for conversational semantic parsing. Instead of typing annotations from scratch\, data specialists choose a correct parse from the K-best output of a few-shot prototyped model. As the K-best list can be large (e.g. K=100)\, we guide the annotators’ exploration of the K-best list via explainable hierarchical clustering. In addition\, we experiment with RoBERTa-based reranking of the K-best list to recalibrate the few-shot model towards Accuracy@K. The final system allows data to be annotated up to 35% faster than with standard\, non-guided K-best annotation and improves the few-shot model’s top-1 accuracy by up to 18%. The second system\, called SchemaBlocks [2]\, is an annotation toolkit for schemas\, or structured descriptions of frequent real-world scenarios (e.g.\, cooking a meal). It represents schemas in the annotation UI as nested blocks. Using a novel Causal ARM model\, we further speed up the annotation process and guide data specialists towards expressive and diverse schemas. As part of this work\, we collect 232 schemas\, evaluating their internal coherence and their coverage on large-scale newswire corpora.
DTSTART;TZID=America/New_York:20220311T120000
DTEND;TZID=America/New_York:20220311T131500
LOCATION:Virtual Seminar
SEQUENCE:0
SUMMARY:Student Seminar – Anton Belyy “Systems for Human-AI Cooperation on Collecting Semantic Annotations”
URL:https://www.clsp.jhu.edu/events/student-seminar-anton-belyy-systems-for-human-ai-cooperation-on-collecting-semantic-annotations/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,Belyy\,March
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-21621@www.clsp.jhu.edu
DTSTAMP:20240328T151845Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nSystems that support expressive\, situated natural language interactions are essential for expanding access to complex computing systems\, such as robots and databases\, to non-experts. Reasoning and learning in such natural language interactions is a challenging open problem. For example\, resolving sentence meaning requires reasoning not only about word meaning\, but also about the interaction context\, including the history of the interaction and the situated environment. In addition\, the sequential dynamics that arise between user and system in and across interactions make learning from static data\, i.e.\, supervised data\, both challenging and ineffective. However\, these same interaction dynamics result in ample opportunities for learning from implicit and explicit feedback that arises naturally in the interaction. This lays the foundation for systems that continually learn\, improve\, and adapt their language use through interaction\, without additional annotation effort. In this talk\, I will focus on these challenges and opportunities. First\, I will describe our work on modeling dependencies between language meaning and interaction context when mapping natural language in interaction to executable code. In the second part of the talk\, I will describe our work on language understanding and generation in collaborative interactions\, focusing on continual learning from explicit and implicit user feedback.\nBiography\nAlane Suhr is a PhD Candidate in the Department of Computer Science at Cornell University\, advised by Yoav Artzi. Her research spans natural language processing\, machine learning\, and computer vision\, with a focus on building systems that participate and continually learn in situated natural language interactions with human users. Alane’s work has been recognized by paper awards at ACL and NAACL\, and has been supported by fellowships and grants\, including an NSF Graduate Research Fellowship\, a Facebook PhD Fellowship\, and research awards from AI2\, ParlAI\, and AWS. Alane has also co-organized multiple workshops and tutorials appearing at NeurIPS\, EMNLP\, NAACL\, and ACL. Previously\, Alane received a BS in Computer Science and Engineering as an Eminence Fellow at the Ohio State University.
DTSTART;TZID=America/New_York:20220314T120000
DTEND;TZID=America/New_York:20220314T131500
LOCATION:Virtual Seminar
SEQUENCE:0
SUMMARY:Alane Suhr (Cornell University) “Reasoning and Learning in Interactive Natural Language Systems”
URL:https://www.clsp.jhu.edu/events/alane-suhr-cornell-university-reasoning-and-learning-in-interactive-natural-language-systems/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,March\,Suhr
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-21616@www.clsp.jhu.edu
DTSTAMP:20240328T151845Z
CATEGORIES;LANGUAGE=en-US:Student Seminars
CONTACT:
DESCRIPTION:Abstract\nSocial media allows researchers to track societal and cultural changes over time based on language analysis tools. Many of these tools rely on statistical algorithms which need to be tuned to specific types of language. Recent studies have shown that the absence of appropriate tuning\, specifically in the presence of semantic shift\, can hinder the robustness of the underlying methods. However\, little is known about the practical effect this sensitivity may have on downstream longitudinal analyses. We explore this gap in the literature through a timely case study: understanding shifts in depression during the course of the COVID-19 pandemic. We find that inclusion of only a small number of semantically-unstable features can promote significant changes in longitudinal estimates of our target outcome. At the same time\, we demonstrate that a recently-introduced method for measuring semantic shift may be used to proactively identify failure points of language-based models and\, in turn\, improve predictive generalization.
DTSTART;TZID=America/New_York:20220318T120000
DTEND;TZID=America/New_York:20220318T131500
LOCATION:Ames Hall 234 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Student Seminar – Keith Harrigian “The Problem of Semantic Shift in Longitudinal Monitoring of Social Media”
URL:https://www.clsp.jhu.edu/events/student-seminar-keith-harrigian-the-problem-of-semantic-shift-in-longitudinal-monitoring-of-social-media/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,Harrigian\,March
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-21497@www.clsp.jhu.edu
DTSTAMP:20240328T151845Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nWhile the “deep learning tsunami” continues to define the state of the art in speech and language processing\, finite-state transducer grammars developed by linguists and engineers are still widely used in industrial\, highly-multilingual settings\, particularly for symbolic\, “front-end” speech applications. In this talk\, I will first briefly review the current state of the OpenFst and OpenGrm finite-state transducer libraries. I then review two “late-breaking” algorithms found in these libraries. The first is a heuristic but highly effective general-purpose optimization routine for weighted transducers. The second is an algorithm for computing the single shortest string of non-deterministic weighted acceptors which lack certain properties required by classic shortest-path algorithms. I will then illustrate how the OpenGrm tools can be used to induce a finite-state string-to-string transduction model known as a pair n-gram model. This model has been applied to grapheme-to-phoneme conversion\, loanword detection\, abbreviation expansion\, and back-transliteration\, among other tasks.\nBiography\nKyle Gorman is an assistant professor of linguistics at the Graduate Center\, City University of New York\, and director of the master’s program in computational linguistics\; he is also a software engineer in the speech and language algorithms group at Google. With Richard Sproat\, he is the coauthor of Finite-State Text Processing (Morgan & Claypool\, 2021) and the creator of Pynini\, a finite-state text processing library for Python. He has also published on statistical methods for comparing computational models\, text normalization\, grapheme-to-phoneme conversion\, and morphological analysis\, as well as many topics in linguistic theory.
DTSTART;TZID=America/New_York:20220401T120000
DTEND;TZID=America/New_York:20220401T131500
LOCATION:Ames Hall 234 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Kyle Gorman (City University of New York) “Weighted Finite-State Transducers: The Later Years”
URL:https://www.clsp.jhu.edu/events/kyle-gorman-city-university-of-new-york-weighted-finite-state-transducers-the-later-years/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,Gorman\,March
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-23304@www.clsp.jhu.edu
DTSTAMP:20240328T151845Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nTransformers are essential to pretraining. As we approach 5 years of BERT\, the connection between attention as architecture and transfer learning remains key to this central thread in NLP. Other architectures such as CNNs and RNNs have been used to replicate pretraining results\, but these either fail to reach the same accuracy or require supplemental attention layers. This work revisits the seminal BERT result and considers pretraining without attention. We consider replacing self-attention layers with a recently developed approach for long-range sequence modeling and with transformer architecture variants. Specifically\, inspired by recent papers like the structured state space sequence model (S4)\, we use simple routing layers based on state-space models (SSMs) and a bidirectional model architecture based on multiplicative gating. We discuss the results of the proposed Bidirectional Gated SSM (BiGS) and present a range of analyses of its properties. Results show that architecture does seem to have a notable impact on downstream performance and a different inductive bias that is worth exploring further.\nBiography\nAlexander “Sasha” Rush is an Associate Professor at Cornell Tech. His work is at the intersection of natural language processing and generative modeling\, with applications in text generation\, efficient inference\, and controllability. He has written several popular open-source software projects supporting NLP research and data science\, and works part-time as a researcher at Hugging Face. He is the secretary of ICLR and developed software used to run virtual conferences during COVID. His work has received paper and demo awards at major NLP\, visualization\, and hardware conferences\, an NSF CAREER Award\, and a Sloan Fellowship. He tweets and blogs\, mostly about coding and ML\, at @srush_nlp.
DTSTART;TZID=America/New_York:20230203T120000
DTEND;TZID=America/New_York:20230203T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Sasha Rush (Cornell University) “Pretraining Without Attention”
URL:https://www.clsp.jhu.edu/events/sasha-rush-cornell-university/
X-COST-TYPE:free
END:VEVENT
Abstract\nWhile large language models have advanced the state-of-the-art in natural language processing\, these models are trained on large-scale datasets\, which may include harmful information. Studies have shown that\, as a result\, the models exhibit social biases and generate misinformation after training. In this talk\, I will discuss my work on analyzing and interpreting the risks of large language models across the areas of fairness\, trustworthiness\, and safety. I will first describe my research on the detection of dialect bias between African American English (AAE) and Standard American English (SAE). The second part investigates the trustworthiness of models through the memorization and subsequent generation of conspiracy theories. I will end my talk with recent work in AI safety regarding text that may lead to physical harm.\nBiography\nSharon is a 5th-year Ph.D. candidate at the University of California\, Santa Barbara\, where she is advised by Professor William Wang. Her research interests lie in natural language processing\, with a focus on Responsible AI. Sharon’s research spans the subareas of fairness\, trustworthiness\, and safety\, with publications in ACL\, EMNLP\, WWW\, and LREC. She has spent summers interning at AWS\, Meta\, and Pinterest. Sharon is a 2022 EECS Rising Star and a current recipient of the Amazon Alexa AI Fellowship for Responsible AI.
X-TAGS;LANGUAGE=en-US:2023\,February\,Levy
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-23308@www.clsp.jhu.edu
DTSTAMP:20240328T151845Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nBiases in datasets\, or unintentionally introduced spurious cues\, are a common source of misspecification in machine learning. Performant models trained on such data can produce gender stereotypes or be brittle under distribution shift. In this talk\, we present several results in multimodal and question answering applications studying sources of dataset bias\, along with several mitigation methods. We propose approaches where known dimensions of dataset bias are explicitly factored out of a model during learning\, without needing to modify data. Finally\, we ask whether dataset biases can be attributed to annotator behavior during annotation. Drawing inspiration from work in psychology on cognitive biases\, we show certain behavioral patterns are highly indicative of the creation of problematic (but valid) data instances in question answering. We give evidence that many existing observations around how dataset bias propagates to models can be attributed to data samples created by annotators we identify.\nBiography\nMark Yatskar is an Assistant Professor at the University of Pennsylvania in the Department of Computer and Information Science. He did his PhD at the University of Washington\, co-advised by Luke Zettlemoyer and Ali Farhadi. He was a Young Investigator at the Allen Institute for Artificial Intelligence for several years\, working with their computer vision team\, Prior. His work spans Natural Language Processing\, Computer Vision\, and Fairness in Machine Learning. He received a Best Paper Award at EMNLP for work on gender bias amplification\, and his work has been featured in Wired and the New York Times.
DTSTART;TZID=America/New_York:20230210T120000
DTEND;TZID=America/New_York:20230210T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Mark Yatskar (University of Pennsylvania) “Understanding Dataset Biases: Behavioral Indicators During Annotation and Contrastive Mitigations”
URL:https://www.clsp.jhu.edu/events/mark-yatskar-university-of-pennsylvania/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2023\,February\,Yatskar
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-23314@www.clsp.jhu.edu
DTSTAMP:20240328T151845Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nWhile GPT models have shown impressive performance on summarization and open-ended text generation\, it’s important to assess their abilities on more constrained text generation tasks that require significant and diverse rewritings. In this talk\, I will discuss the challenges of evaluating systems that are highly competitive and perform close to humans on two such tasks: (i) paraphrase generation and (ii) text simplification. To address these challenges\, we introduce an interactive Rank-and-Rate evaluation framework. Our results show that GPT-3.5 has made a major step up from fine-tuned T5 in paraphrase generation\, but still lacks the diversity and creativity of humans who spontaneously produce large quantities of paraphrases.\nAdditionally\, we demonstrate that GPT-3.5 performs similarly to a single human in text simplification\, which makes it difficult for existing automatic evaluation metrics to distinguish between the two. To overcome this shortcoming\, we propose LENS\, a learnable evaluation metric that outperforms SARI\, BERTScore\, and other existing methods in both automatic evaluation and minimal risk decoding for text generation.\nBiography\nWei Xu is an assistant professor in the School of Interactive Computing at the Georgia Institute of Technology\, where she is also affiliated with the new NSF AI CARING Institute and the Machine Learning Center. She received her Ph.D. in Computer Science from New York University and her B.S. and M.S. from Tsinghua University. Xu’s research interests are in natural language processing\, machine learning\, and social media\, with a focus on text generation\, stylistics\, robustness and controllability of machine learning models\, and reading and writing assistive technology. She is a recipient of the NSF CAREER Award\, CrowdFlower AI for Everyone Award\, Criteo Faculty Research Award\, and a Best Paper Award at COLING’18. She has also received funds from DARPA and IARPA. She is an elected member of the NAACL executive board and regularly serves as a senior area chair for AI/NLP conferences.
DTSTART;TZID=America/New_York:20230224T120000
DTEND;TZID=America/New_York:20230224T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Wei Xu (Georgia Tech) “GPT-3 vs Humans: Rethinking Evaluation of Natural Language Generation”
URL:https://www.clsp.jhu.edu/events/wei-xu-georgia-tech/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2023\,February\,Xu
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-23316@www.clsp.jhu.edu
DTSTAMP:20240328T151845Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nUnderstanding the implications underlying a text is critical to assessing its impact\, in particular the social dynamics that may result from a reading of the text. This requires endowing artificial intelligence (AI) systems with pragmatic reasoning\, for example to correctly conclude that the statement “Epidemics and cases of disease in the 21st century are ‘staged’” relates to unfounded conspiracy theories. In this talk\, I discuss how shortcomings in the ability of current AI systems to reason about pragmatics present challenges to equitable detection of false or harmful language. I demonstrate how these shortcomings can be addressed by imposing human-interpretable structure on deep learning architectures using insights from linguistics.\nIn the first part of the talk\, I describe how adversarial text generation algorithms can be used to improve the robustness of content moderation systems. I then introduce a pragmatic formalism for reasoning about harmful implications conveyed by social media text. I show how this pragmatic approach can be combined with generative neural language models to uncover implications of news headlines. I also address the bottleneck to progress in text generation posed by gaps in the evaluation of factuality. I conclude by showing how context-aware content moderation can be used to ensure safe interactions with conversational agents.\nBiography\nSaadia Gabriel is a PhD candidate in the Paul G. Allen School of Computer Science & Engineering at the University of Washington\, advised by Prof. Yejin Choi and Prof. Franziska Roesner. Her research revolves around natural language processing and machine learning\, with a particular focus on building systems for understanding how social commonsense manifests in text (i.e. how people typically behave in social scenarios)\, as well as mitigating the spread of false or harmful text (e.g. Covid-19 misinformation). Her work has been covered by a wide range of media outlets like Forbes and TechCrunch. It has also received a 2019 ACL best short paper nomination\, a 2019 IROS RoboCup best paper nomination\, and a best paper award at the 2020 WeCNLP summit. Prior to her PhD\, Saadia received a BA summa cum laude from Mount Holyoke College in Computer Science and Mathematics.
DTSTART;TZID=America/New_York:20230227T120000
DTEND;TZID=America/New_York:20230227T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Saadia Gabriel (University of Washington) “Socially Responsible and Factual Reasoning for Equitable AI Systems”
URL:https://www.clsp.jhu.edu/events/saadia-gabriel-university-of-washington-socially-responsible-and-factual-reasoning-for-equitable-ai-systems/
X-COST-TYPE:free
\n\n X-TAGS;LANGUAGE=en-US:2023\,February\,Gabriel END:VEVENT BEGIN:VEVENT UID:ai1ec-23320@www.clsp.jhu.edu DTSTAMP:20240328T151845Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nSpeech communication represents a core domain for education\, team problem solving\, social engagement\, and business interactions. The ability of speech technology to extract layers of knowledge and assess engagement content represents the next generation of advanced speech solutions. Today\, the emergence of big data and machine learning\, as well as voice-enabled speech systems\, has created the need for effective voice capture and automatic speech/speaker recognition. The ability to employ speech and language technology to assess human-to-human interactions offers new research paradigms having profound impact on assessing human interaction. In this talk\, we will focus on big data naturalistic audio processing relating to (i) child learning spaces\, and (ii) the NASA APOLLO lunar missions. ML-based technology advancements include automatic audio diarization\, speech recognition\, and speaker recognition. Child-Teacher-based assessment of conversational interactions is explored\, including keyword and “WH-word” (e.g.\, who\, what\, etc.) usage. Diarization processing solutions are applied both to classroom/learning-space child speech and to massive APOLLO data. CRSS-UTDallas is expanding our original Apollo-11 corpus\, resulting in a massive multi-track audio processing challenge to make available 150\,000 hours of Apollo mission data to be shared with science communities: (i) speech/language technology\, (ii) STEM/science and team-based researchers\, and (iii) education/historical/archiving specialists. Our goal here is to provide resources that allow us to better understand how people work/learn collaboratively and\, in the case of Apollo\, how teams accomplished one of mankind’s greatest scientific/technological challenges of the last century.\nBiography\nJohn H.L. Hansen received Ph.D. & M.S. degrees from Georgia Institute of Technology\, and a B.S.E.E. from Rutgers Univ. He joined Univ. of Texas at Dallas (UTDallas) in 2005\, where he currently serves as Associate Dean for Research\, Prof. of ECE\, Distinguished Univ. Chair in Telecom. Engineering\, and directs the Center for Robust Speech Systems (CRSS). He is an ISCA Fellow\, IEEE Fellow\, and has served as Member and TC-Chair of the IEEE Signal Proc. Society Speech & Language Proc. Tech. Comm. (SLTC)\, and Technical Advisor to the U.S. Delegate for NATO (IST/TG-01). He served as ISCA President (2017-21)\, continues to serve on the ISCA Board (2015-23) as Treasurer\, has supervised 99 PhD/MS thesis candidates (EE\, CE\, BME\, TE\, CS\, Ling.\, Cog. Sci.\, Spch. Sci.\, Hear. Sci.)\, and was recipient of the 2020 UT-Dallas Provost’s Award for Grad. PhD Research Mentoring. He is author/co-author of 865 journal/conference papers\, including 14 textbooks\, in the field of speech/language/hearing processing & technology\, including coauthoring the textbook Discrete-Time Processing of Speech Signals (IEEE Press\, 2000)\, and is lead author of the report “The Impact of Speech Under ‘Stress’ on Military Speech Technology” (NATO RTO-TR-10\, 2000). He served as Organizer\, Chair/Co-Chair/Tech. Chair for ISCA INTERSPEECH-2022\, IEEE ICASSP-2010\, IEEE SLT-2014\, ISCA INTERSPEECH-2002\, and Tech. Chair for IEEE ICASSP-2024.
He received the 2022 IEEE Signal Processing Society Leo Beranek Meritorious Service Award.\n DTSTART;TZID=America/New_York:20230303T120000 DTEND;TZID=America/New_York:20230303T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:John Hansen (University of Texas at Dallas) “Challenges and Advancements in Speaker Diarization & Recognition for Naturalistic Data Streams” URL:https://www.clsp.jhu.edu/events/john-hansen-university-of-texas-at-dallas/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
Abstract
\nSpeech communication represents a core domain for education\, team problem solving\, social engagement\, and business interactions. The ability of speech technology to extract layers of knowledge and assess engagement content represents the next generation of advanced speech solutions. Today\, the emergence of big data and machine learning\, as well as voice-enabled speech systems\, has created the need for effective voice capture and automatic speech/speaker recognition. The ability to employ speech and language technology to assess human-to-human interactions offers new research paradigms having profound impact on assessing human interaction. In this talk\, we will focus on big data naturalistic audio processing relating to (i) child learning spaces\, and (ii) the NASA APOLLO lunar missions. ML-based technology advancements include automatic audio diarization\, speech recognition\, and speaker recognition. Child-Teacher-based assessment of conversational interactions is explored\, including keyword and “WH-word” (e.g.\, who\, what\, etc.) usage. Diarization processing solutions are applied both to classroom/learning-space child speech and to massive APOLLO data. CRSS-UTDallas is expanding our original Apollo-11 corpus\, resulting in a massive multi-track audio processing challenge to make available 150\,000 hours of Apollo mission data to be shared with science communities: (i) speech/language technology\, (ii) STEM/science and team-based researchers\, and (iii) education/historical/archiving specialists. Our goal here is to provide resources that allow us to better understand how people work/learn collaboratively and\, in the case of Apollo\, how teams accomplished one of mankind’s greatest scientific/technological challenges of the last century.
\nBiography
\nJohn H.L. Hansen received Ph.D. & M.S. degrees from Georgia Institute of Technology\, and a B.S.E.E. from Rutgers Univ. He joined Univ. of Texas at Dallas (UTDallas) in 2005\, where he currently serves as Associate Dean for Research\, Prof. of ECE\, Distinguished Univ. Chair in Telecom. Engineering\, and directs the Center for Robust Speech Systems (CRSS). He is an ISCA Fellow\, IEEE Fellow\, and has served as Member and TC-Chair of the IEEE Signal Proc. Society Speech & Language Proc. Tech. Comm. (SLTC)\, and Technical Advisor to the U.S. Delegate for NATO (IST/TG-01). He served as ISCA President (2017-21)\, continues to serve on the ISCA Board (2015-23) as Treasurer\, has supervised 99 PhD/MS thesis candidates (EE\, CE\, BME\, TE\, CS\, Ling.\, Cog. Sci.\, Spch. Sci.\, Hear. Sci.)\, and was recipient of the 2020 UT-Dallas Provost’s Award for Grad. PhD Research Mentoring. He is author/co-author of 865 journal/conference papers\, including 14 textbooks\, in the field of speech/language/hearing processing & technology\, including coauthoring the textbook Discrete-Time Processing of Speech Signals (IEEE Press\, 2000)\, and is lead author of the report “The Impact of Speech Under ‘Stress’ on Military Speech Technology” (NATO RTO-TR-10\, 2000). He served as Organizer\, Chair/Co-Chair/Tech. Chair for ISCA INTERSPEECH-2022\, IEEE ICASSP-2010\, IEEE SLT-2014\, ISCA INTERSPEECH-2002\, and Tech. Chair for IEEE ICASSP-2024. He received the 2022 IEEE Signal Processing Society Leo Beranek Meritorious Service Award.
\n\n X-TAGS;LANGUAGE=en-US:2023\,Hansen\,March END:VEVENT BEGIN:VEVENT UID:ai1ec-23439@www.clsp.jhu.edu DTSTAMP:20240328T151845Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nAs data-based technologies proliferate\, it is increasingly important for researchers to be aware of their work’s wider impact. Concerns like navigating the IRB and figuring out copyright and licensing issues are still key\, but the current shift in focus to matters like inclusivity\, fairness\, and transparency\, and their impact on the research/development life cycle\, has added complexity to the research task. In this talk\, we will take a broad look at the various ways ethics intersects with natural language processing\, machine learning\, and artificial intelligence research\, and discuss strategies and resources for managing these concerns within the broader research framework.\nBiography\nDenise is responsible for the overall operation of LDC’s External Relations group\, which includes intellectual property management\, licensing\, regulatory matters\, publications\, membership and communications. Before joining LDC\, she practiced law for over 20 years in the areas of international trade\, intellectual property and commercial litigation. She has an A.B. in Political Science from Bryn Mawr College and a Juris Doctor degree from the University of Miami School of Law. DTSTART;TZID=America/New_York:20230310T120000 DTEND;TZID=America/New_York:20230310T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street SEQUENCE:0 SUMMARY:Denise DiPersio (Linguistic Data Consortium\, University of Pennsylvania) “Data and Ethics: Where Does the Twain Meet?” URL:https://www.clsp.jhu.edu/events/denise-dipersio-linguistic-data-consortium-university-of-pennsylvania-data-and-ethics-where-does-the-twain-meet/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
Abstract
\nAs data-based technologies proliferate\, it is increasingly important for researchers to be aware of their work’s wider impact. Concerns like navigating the IRB and figuring out copyright and licensing issues are still key\, but the current shift in focus to matters like inclusivity\, fairness\, and transparency\, and their impact on the research/development life cycle\, has added complexity to the research task. In this talk\, we will take a broad look at the various ways ethics intersects with natural language processing\, machine learning\, and artificial intelligence research\, and discuss strategies and resources for managing these concerns within the broader research framework.
\nBiography
\nDenise is responsible for the overall operation of LDC’s External Relations group\, which includes intellectual property management\, licensing\, regulatory matters\, publications\, membership and communications. Before joining LDC\, she practiced law for over 20 years in the areas of international trade\, intellectual property and commercial litigation. She has an A.B. in Political Science from Bryn Mawr College and a Juris Doctor degree from the University of Miami School of Law.
\n X-TAGS;LANGUAGE=en-US:2023\,DiPersio\,March END:VEVENT BEGIN:VEVENT UID:ai1ec-23312@www.clsp.jhu.edu DTSTAMP:20240328T151845Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nAdvanced neural language models have grown ever larger and more complex\, pushing forward the limits of language understanding and generation\, while diminishing interpretability. The black-box nature of deep neural networks blocks humans from understanding them\, as well as trusting and using them in real-world applications. This talk will introduce interpretation techniques that bridge the gap between humans and models for developing trustworthy natural language processing (NLP). I will first show how to explain black-box models and evaluate their explanations for understanding their prediction behavior. Then I will introduce how to improve the interpretability of neural language models by making their decision-making transparent and rationalized. Finally\, I will discuss how to diagnose and improve models (e.g.\, robustness) through the lens of explanations. I will conclude with future research directions that are centered around model interpretability and committed to facilitating communications and interactions between intelligent machines\, system developers\, and end users for long-term trustworthy AI.\nBiography\nHanjie Chen is a Ph.D. candidate in Computer Science at the University of Virginia\, advised by Prof. Yangfeng Ji. Her research interests lie in Trustworthy AI\, Natural Language Processing (NLP)\, and Interpretable Machine Learning. She develops interpretation techniques to explain neural language models and make their prediction behavior transparent and reliable. She is a recipient of the Carlos and Esther Farrar Fellowship and the Best Poster Award at the ACM CAPWIC 2021. Her work has been published at top-tier NLP/AI conferences (e.g.\, ACL\, AAAI\, EMNLP\, NAACL) and selected as a National Center for Women & Information Technology (NCWIT) Collegiate Award Finalist in 2021. She (as the primary instructor) co-designed and taught the course\, Interpretable Machine Learning\, and was awarded the UVA CS Outstanding Graduate Teaching Award and University-wide Graduate Teaching Awards Nominee (top 5% of graduate instructors). More details can be found at https://www.cs.virginia.edu/~hc9mx DTSTART;TZID=America/New_York:20230313T120000 DTEND;TZID=America/New_York:20230313T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Hanjie Chen (University of Virginia) “Bridging Humans and Machines: Techniques for Trustworthy NLP” URL:https://www.clsp.jhu.edu/events/hanjie-chen-university-of-virginia/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract
\nAdvanced neural language models have grown ever large
r and more complex\, pushing forward the limits of language understanding
and generation\, while diminishing interpretability. The black-box nature
of deep neural networks blocks humans from understanding them\, as well as
trusting and using them in real-world applications. This talk will introd
uce interpretation techniques that bridge the gap between humans and model
s for developing trustworthy natural language processing
(NLP). I will first show how to explain black-box models and evalua
te their explanations for understanding their prediction behavior. Then I
will introduce how to improve the interpretability of neural language mode
ls by making their decision-making transparent and rationalized. Finally\,
I will discuss how to diagnose and improve models (e.g.\, robustness) thr
ough the lens of explanations. I will conclude with future research direct
ions that are centered around model interpretability and committed to faci
litating communications and interactions between intelligent machines\, sy
stem developers\, and end users for long-term trustworthy AI.
\nBiography
\nHanjie Chen is a Ph.D. candidate in Computer Science at the University of Virginia\, advised by Prof. Yangfeng Ji. Her research interests lie in Trustworthy AI\, Natural Language Processing (NLP)\, and Interpretable Machine Learning. She develops interpretation techniques to explain neural language models and make their prediction behavior transparent and reliable. She is a recipient of the Carlos and Esther Farrar Fellowship and the Best Poster Award at the ACM CAPWIC 2021. Her work has been published at top-tier NLP/AI conferences (e.g.\, ACL\, AAAI\, EMNLP\, NAACL) and selected as a National Center for Women & Information Technology (NCWIT) Collegiate Award Finalist in 2021. She (as the primary instructor) co-designed and taught the course\, Interpretable Machine Learning\, and was awarded the UVA CS Outstanding Graduate Teaching Award and University-wide Graduate Teaching Awards Nominee (top 5% of graduate instructors). More details can be found at https://www.cs.virginia.edu/~hc9mx
\n X-TAGS;LANGUAGE=en-US:2023\,Chen\,February END:VEVENT BEGIN:VEVENT UID:ai1ec-23505@www.clsp.jhu.edu DTSTAMP:20240328T151845Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nRecent advances in large pretrained language models have unlocked new exciting applications for Natural Language Generation for creative tasks\, such as lyrics or humour generation. In this talk we will discuss recent works by our team at Alexa AI and discuss current challenges: (1) Pun understanding and generation: We release new datasets for pun understanding and the novel task of context-situated pun generation\, and demonstrate the value of our annotations for pun classification and generation tasks. (2) Song lyric generation: we design a hierarchical lyric generation framework that enables us to generate pleasantly-singable lyrics without training on melody-lyric aligned data\, and show that our approach is competitive with strong baselines supervised on parallel data. (3) Create with Alexa: a multimodal story creation experience recently launched on Alexa devices\, which leverages story text generation models in tandem with story visualization and background music generation models to produce multimodal stories for kids.\nBiography\nAlessandra Cervone is an Applied Scientist in the Natural Understanding team at Amazon Alexa AI. Alessandra holds an MSc in Speech and Language Processing from University of Edinburgh and a PhD in CS from University of Trento (Italy). During her PhD\, Alessandra worked on computational models of coherence in open-domain dialogue advised by Giuseppe Riccardi. In the first year of the PhD\, she was the team leader of one of the teams selected to compete in the first edition of the Alexa Prize. More recently\, her research interests have been focused on natural language generation and its evaluation\, in particular in the context of creative AI applications. DTSTART;TZID=America/New_York:20230317T120000 DTEND;TZID=America/New_York:20230317T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Alessandra Cervone (Amazon) “Controllable Text Generation for Creative Applications” URL:https://www.clsp.jhu.edu/events/alexxandra-cervone-amazon/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract
\nRecent advances in large pretrain ed language models have unlocked new exciting applications for Natural Lan guage Generation for creative tasks\, such as lyrics or humour generation. In this talk we will discuss recent works by our team at Alexa AI and dis cuss current challenges: (1) Pun understanding and generation: We release new datasets for pun understanding and the novel task of context-situated pun generation\, and demonstrate the value of our annotations for pun clas sification and generation tasks. (2) Song lyric generation: we design a hi erarchical lyric generation framework that enables us to generate pleasant ly-singable lyrics without training on melody-lyric aligned data\, and sho w that our approach is competitive with strong baselines supervised on par allel data. (3) Create with Alexa: a multimodal story creation experience recently launched on Alexa devices\, which leverages story text generation models in tandem with story visualization and background music generation models to produce multimodal stories for kids.
\nBiography
\nAlessandra Cervone is an Applied Scientist in the Natural Understanding team at Amazon Alexa AI. Alessandra holds an MSc in Speech and Language Processing from University of Edinburgh and a PhD in CS from University of Trento (Italy). During her PhD\, Alessandra worked on comput ational models of coherence in open-domain dialogue advised by Giuseppe Ri ccardi. In the first year of the PhD\, she was the team leader of one of t he teams selected to compete in the first edition of the Alexa Prize. More recently\, her research interests have been focused on natural language g eneration and its evaluation\, in particular in the context of creative AI applications.
\n \\nAbstr act
\nDespite many recent advances in automatic speech reco gnition (ASR)\, linguists and language communities engaged in language doc umentation projects continue to face the obstacle of the “transcription bo ttleneck”. Researchers in NLP typically do not distinguish between widely spoken languages that currently happen to have few training resources and endangered languages that will never have abundant data. As a result\, we often fail to thoroughly explore when ASR is helpful for language document ation\, what architectures work best for the sorts of languages that are i n need of documentation\, and how data can be collected and organized to p roduce optimal results. In this talk I describe several projects that atte mpt to bridge the gap between the promise of ASR for language documentatio n and the reality of using this technology in real-world settings.
\nBiography
\nAbstr act
\nOur research focuses on improving speech processing algorithms\, such as automatic speech recognition (ASR)\, speaker identification\, and depression detection\, under challenging conditions such as limited data (for example\, children’s or clinical speech)\, mismatched conditions (for example\, training on read speech while recognizing conversational speech)\, and noisy speech\, using a hybrid data-driven and knowledge-based approach. This approach requires understanding of both machine learning approaches and the human speech production and perception systems. I will summarize in this talk our work on children’s ASR using self-supervised models\, detecting depression from speech signals using novel speaker disentanglement techniques\, and automating the scoring of children’s reading tasks with both ASR and innovative NLP algorithms.
\nBiography
\nAbeer Alwan received her Ph.D. in Electrical Engineering and Computer Science from MIT in 1992. Since then\, she has been with the ECE department at UCLA\, where she is now a Full Professor and directs the Speech Processing and Auditory Perception Laboratory. She is the recipient of the NSF Research Initiation and Career Awards\, the NIH FIRST Award\, the UCLA-TRW Excellence in Teaching Award\, the Okawa Foundation Award in Telecommunication\, and the Engineer’s Council Educator Award. She is a Fellow of the Acoustical Society of America\, IEEE\, and the International Speech Communication Assoc. (ISCA). She was a Fellow at the Radcliffe Institute\, Harvard University\, co-Editor in Chief of Speech Communication\, Associate Editor of JASA and IEEE TASLP\, a Distinguished Lecturer of ISCA\, and a member of the IEEE Signal Processing Society Board of Governors\, and she is currently on the advisory board of ISCA and the UCLA-Amazon Science Hub for Humanity and AI.
\n X-TAGS;LANGUAGE=en-US:2024\,Alwan\,February END:VEVENT BEGIN:VEVENT UID:ai1ec-24425@www.clsp.jhu.edu DTSTAMP:20240328T151845Z CATEGORIES;LANGUAGE=en-US:Student Seminars CONTACT: DESCRIPTION:Abstract\n\nOver the past three decades\, the fields of automat ic speech recognition (ASR) and machine translation (MT) have witnessed re markable advancements\, leading to exciting research directions such as sp eech-to-text translation (ST). This talk will delve into the domain of con versational ST\, an essential facet of daily communication\, which present s unique challenges including spontaneous informal language\, the presence of disfluencies\, high context dependence and a scarcity of ST paired dat a.\n\nConversational speech is notably characterized by its reliance on sh ort segments\, requiring the integration of broader contexts to maintain c onsistency and improve the translation’s fluency and quality. Incorporati ng longer contexts has been shown to benefit machine translation\, but the inclusion of context in E2E-ST remains under-studied. Previous approaches have used simple concatenation of audio inputs for context\, leading to m emory bottlenecks\, especially in self-attention networks\, due to the enc oding of lengthy audio segments.\n\nFirst\, I will describe how to integra te the context into E2E-ST with minimum additional memory cost. Then\, I will discuss the challenges of incorporating context in an E2E-ST system w ith limited data during training and inference and propose solutions to ov ercome them. Afterward\, I will illustrate the impact of context size and the inclusion of speaker information on performance. Lastly\, I will demon strate the benefits of context in conversational settings focusing on asp ects like anaphora resolution and the identification of named entities. DTSTART;TZID=America/New_York:20240205T120000 DTEND;TZID=America/New_York:20240205T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Amir Hussein “Towards End-to-End Conversational Speech Translation” URL:https://www.clsp.jhu.edu/events/amir-hussein-towards-end-to-end-convers ational-speech-translation/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstr
act
\n
Over the past three decades\, the fields of automatic speech recognition (ASR) and machine tra nslation (MT) have witnessed remarkable advancements\, leading to exciting research directions such as speech-to-text translation (ST). This talk wi ll delve into the domain of conversational ST\, an essential facet of dail y communication\, which presents unique challenges including spontaneous i nformal language\, the presence of disfluencies\, high context dependence and a scarcity of ST paired data.
\n\nAbstr act
\nThere is an enormous data gap between how AI systems and children learn language: The best LLMs now learn language from text with a word count in the trillions\, whereas it would take a child roughly 100K years to reach those numbers through speec h (Frank\, 2023\, “Bridging the data gap”). There is also a clear generali zation gap: whereas machines struggle with systematic generalization\, peo ple excel. For instance\, once a child learns how to “skip\,” they immedia tely know how to “skip twice” or “skip around the room with their hands up ” due to their compositional skills. In this talk\, I’ll describe two case studies in addressing these gaps:
\n1) The dat a gap: We train deep neural networks from scratch (using DINO\, CLIP\, etc .)\, not on large-scale data from the web\, but through the eyes and ears of a single child. Using head-mounted video recordings from a child (61 ho urs of video slices over 19 months)\, we show how deep neural networks can acquire many word-referent mappings\, generalize to novel visual referent s\, and achieve multi-modal alignment. Our results demonstrate how today’s AI models are capable of learning key aspects of children’s early knowled ge from realistic input.
\n2) The generalization gap: Can neural networks capture human-like systematic generalization? We address a 35-year-old debate catalyzed by Fodor and Pylyshyn’s classic article\, which argued that standard neural networks are not viable models of the mind because they lack systematic compositionality — the algebraic ability to understand and produce novel combinations from known components. We’ll show how a neural network can achieve human-like systematic generalization when trained through meta-learning for compositionality (MLC)\, a new method for optimizing the compositional skills of neural networks through practice. With MLC\, a neural network can match human performance and solve several machine learning benchmarks.
\nGiv en this work\, we’ll discuss the paths forward for building machines that learn\, generalize\, and interact in more human-like ways based on more na tural input.
\nRelated articles:
\nVong\, W. K.\, Wang\, W.\, Orhan\, A. E.\, and Lake\, B. M. (2024). Grounded language acquisition through the eyes and ears of a single child. Science\, 383.
\nOrhan\, A. E.\, and Lake\, B. M. (in press). Learning high-level visual representations from a child’s perspective without strong inductive biases. Nature Machine Intelligence.
\nLake\, B. M. and Baroni \, M. (2023). Human-like systematic generalization through a meta-learning neural network. Nature\, 623\, 115-121.
\nBiography
\nBrenden M. Lake is an Assistant Professor of Psychology and Data Science at New York University. He received his M.S. and B.S. in Symbolic Systems from Stanford University in 2009\, and his Ph.D. in Cognitive Science from MIT in 2014. He was a postdoctoral Data Science Fellow at NYU from 2014-2017. Brenden is a recipient of the Robert J. Glushko Prize for Outstanding Doctoral Dissertation in Cognitive Science\, he is an MIT Technology Review Innovator Under 35\, and his research was selected by Scientific American as one of the 10 most important advances of 2016. Brenden’s research focuses on computational problems that are easier for people than they are for machines\, such as learning new concepts\, creating new concepts\, learning-to-learn\, and asking questions.
\n
\\nAbstr act
\nLarge language models like ChatGPT have shown extraor dinary abilities for writing. While impressive at first glance\, large lan guage models aren’t perfect and often make mistakes humans would not make. The main architecture behind ChatGPT mostly doesn’t differ from early neu ral networks\, and as a consequence\, carries some of the same limitations . My work revolves around the use of neural networks like ChatGPT mixed wi th symbolic methods from early AI and how these two families of methods ca n combine to create more robust AI. I talk about some of the neurosymbolic methods I used for applications in story generation and understanding — w ith the goal of eventually creating AI that can play Dungeons & Dragons. I also discuss pain points that I found for improving accessible communicat ion and show how large language models can supplement such communication.< /p>\n
Biography
\nAbstr act
\nWe introduce STAR (Stream Transduction with Anchor Re presentations)\, a novel Transformer-based model designed for efficient se quence-to-sequence transduction over streams. STAR dynamically segments in put streams to create compressed anchor representations\, achieving nearly lossless compression (12x) in Automatic Speech Recognition (ASR) and outp erforming existing methods. Moreover\, STAR demonstrates superior segmenta tion and latency-quality trade-offs in simultaneous speech-to-text tasks\, optimizing latency\, memory footprint\, and quality.
\n X-TAGS;LANGUAGE=en-US:2024\,February\,Tan END:VEVENT BEGIN:VEVENT UID:ai1ec-24429@www.clsp.jhu.edu DTSTAMP:20240328T151845Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nI discuss the application of Foundation Models in Astronomy through the collaborative efforts of the UniverseTBD consortium with a mission to democratize Science for everyone. One of our key objectives is to overcome the limitations of general-purpose Foundation Models\, such as producing limited information in specialized fields. To this end\, we have developed the first specialized large language model for Astronomy\, AstroLLaMa-1. This model\, enhanced by exposure to domain-specific literature from the NASA Astrophysics Data System and ArXiv\, demonstrates improved text completion and embedding capabilities over existing GPT models. I further discuss the potential of LLMs in generating complex scientific hypotheses and extracting meaningful insights from astronomy literature. Our findings\, validated by human experts\, demonstrate the LLM capability in informed scientific critique and uncover intriguing patterns in the embedding space\, highlighting the potential of LLMs to augment scientific inquiry. I will also discuss preliminary work with the multi-modal model AstroLLaVA\, which allows us to interact with astronomical images via natural language. Through the work of UniverseTBD\, we aim to explore how artificial intelligence can assist human intelligence in Astronomy and\, more broadly\, Science.\nBiography\nIoana Ciucă\, who goes by Jo\, is an interdisciplinary Jubilee Joint Fellow at the Australian National University\, working across the School of Computing and the Research School of Astronomy & Astrophysics. Before joining ANU\, Jo finished her PhD in Astrophysics at University College London in the United Kingdom\, where she worked at the intersection of Astronomy and Machine Learning to understand the formation and evolution history of our Galaxy\, the Milky Way. Jo is now focusing on utilizing foundation models that benefit researchers everywhere\, working alongside the UniverseTBD team of more than 30 astronomers\, engineers\, ML practitioners and enthusiasts worldwide.\n DTSTART;TZID=America/New_York:20240223T120000 DTEND;TZID=America/New_York:20240223T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Ioana Ciuca (Australian National University) “A Universe To Be Decided: Towards Specialized Foundation Models for Advancing Astronomy” URL:https://www.clsp.jhu.edu/events/ioana-ciuca-australian-national-university/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract
\nI discuss the application of Foundation Models in Astronomy through the collaborative efforts of the UniverseTBD consortium with a mission to democratize Science for everyone. One of our key objectives is to overcome the limitations of general-purpose Foundation Models\, such as producing limited information in specialized fields. To this end\, we have developed the first specialized large language model for Astronomy\, AstroLLaMa-1. This model\, enhanced by exposure to domain-specific literature from the NASA Astrophysics Data System and ArXiv\, demonstrates improved text completion and embedding capabilities over existing GPT models. I further discuss the potential of LLMs in generating complex scientific hypotheses and extracting meaningful insights from astronomy literature. Our findings\, validated by human experts\, demonstrate the LLM capability in informed scientific critique and uncover intriguing patterns in the embedding space\, highlighting the potential of LLMs to augment scientific inquiry. I will also discuss preliminary work with the multi-modal model AstroLLaVA\, which allows us to interact with astronomical images via natural language. Through the work of UniverseTBD\, we aim to explore how artificial intelligence can assist human intelligence in Astronomy and\, more broadly\, Science.
\nBiography
\nIoana Ciucă\, who goes by Jo\, is an interdisciplinary Jubilee Joint Fellow at the Australian National University\, working across the School of Computing and the Research School of Astronomy & Astrophysics. Before joining ANU\, Jo finished her PhD in Astrophysics at University College London in the United Kingdom\, where she worked at the intersection of Astronomy and Machine Learning to understand the formation and evolution history of our Galaxy\, the Milky Way. Jo is now focusing on utilizing foundation models that benefit researchers everywhere\, working alongside the UniverseTBD team of more than 30 astronomers\, engineers\, ML practitioners and enthusiasts worldwide.
\n
\n X-TAGS;LANGUAGE=en-US:2024\,Ciuca\,February END:VEVENT BEGIN:VEVENT UID:ai1ec-24457@www.clsp.jhu.edu DTSTAMP:20240328T151845Z CATEGORIES;LANGUAGE=en-US:Student Seminars CONTACT: DESCRIPTION:Abstract\nAs artificial intelligence (AI) continues to rapidly expand into existing healthcare infrastructure – e.g.\, clinical decision support\, administrative tasks\, and public health surveillance – it is perhaps more important than ever to reflect on the broader purpose of such systems. While much focus has been on the potential for this technology to improve general health outcomes\, there also exists a significant\, but understated\, opportunity to use this technology to address health-related disparities. Accomplishing the latter depends not only on our ability to effectively identify addressable areas of systemic inequality and translate them into tasks that are machine learnable\, but also our ability to measure\, interpret\, and counteract barriers in training data that may inhibit robustness to distribution shift upon deployment (i.e.\, new populations\, temporal dynamics). In this talk\, we will discuss progress made along both of these dimensions. We will begin by providing background on the state of AI for promoting health equity. Then\, we will present results from a recent clinical phenotyping project and discuss their implications for prevailing views regarding language model robustness in clinical applications. Finally\, we will showcase ongoing efforts to proactively address systemic inequality in healthcare by identifying and characterizing stigmatizing language in medical records. DTSTART;TZID=America/New_York:20240226T120000 DTEND;TZID=America/New_York:20240226T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Keith Harrigian (JHU) “Fighting Bias From Bias: Robust Natural Language Processing Techniques to Promote Health Equity” URL:https://www.clsp.jhu.edu/events/keith-harrigian-jhu-fighting-bias-from-bias-robust-natural-language-processing-techniques-to-promote-health-equity/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
Abstr act
\nAs artificial intelligence (AI) continues to rapidly expand into existing healthcare infrastructure – e.g.\, clinical decision support\, administrative tasks\, and public health surveillance – it is perhaps more important than ever to reflect on the broader purpose of such systems. While much focus has been on the potential for this technology to improve general health outcomes\, there also exists a significant\, but understated\, opportunity to use this technology to address health-related disparities. Accomplishing the latter depends not only on our ability to effectively identify addressable areas of systemic inequality and translate them into tasks that are machine learnable\, but also our ability to measure\, interpret\, and counteract barriers in training data that may inhibit robustness to distribution shift upon deployment (i.e.\, new populations\, temporal dynamics). In this talk\, we will discuss progress made along both of these dimensions. We will begin by providing background on the state of AI for promoting health equity. Then\, we will present results from a recent clinical phenotyping project and discuss their implications for prevailing views regarding language model robustness in clinical applications. Finally\, we will showcase ongoing efforts to proactively address systemic inequality in healthcare by identifying and characterizing stigmatizing language in medical records.
\n X-TAGS;LANGUAGE=en-US:2024\,February\,Harrigian END:VEVENT BEGIN:VEVENT UID:ai1ec-24459@www.clsp.jhu.edu DTSTAMP:20240328T151845Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION: DTSTART;TZID=America/New_York:20240301T120000 DTEND;TZID=America/New_York:20240301T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Mohit Iyyer “Improving\, Evaluating and Detecting Long-Form LLM-Generated Text” URL:https://www.clsp.jhu.edu/events/mohit-iyyer-improving-evaluating-and-detecting-long-form-llm-generated-text/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2024\,Iyyer\,March END:VEVENT BEGIN:VEVENT UID:ai1ec-24461@www.clsp.jhu.edu DTSTAMP:20240328T151845Z CATEGORIES;LANGUAGE=en-US:Student Seminars CONTACT: DESCRIPTION:Abstract\nMost machine translation systems operate on the sentence level\, while humans write and translate within a given context. Operating on individual sentences forces error-prone sentence segmentation into the machine translation pipeline. This limits the upper-bound performance of these systems by creating noisy training bitext. Further\, many grammatical features necessitate inter-sentential context in order to translate\, which makes perfect sentence-level machine translation an impossible task. In this talk\, we will cover the inherent limits of sentence-level machine translation. Following this\, we will explore a key obstacle in the way of true context-aware machine translation—an abject lack of data. Finally\, we will cover recent work that provides (1) a new evaluation dataset that specifically addresses the translation of context-dependent discourse phenomena and (2) reconstructed documents from large-scale sentence-level bitext that can be used to improve performance when translating these types of phenomena. DTSTART;TZID=America/New_York:20240304T120000 DTEND;TZID=America/New_York:20240304T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Rachel Wicks (JHU) “To Sentences and Beyond: Paving the Way for Context-Aware Machine Translation” URL:https://www.clsp.jhu.edu/events/rachel-wicks-jhu/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract
\nMost machine translation systems operate on the sentence level\, while humans write and translate within a given context. Operating on individual sentences forces error-prone sentence segmentation into the machine translation pipeline. This limits the upper-bound performance of these systems by creating noisy training bitext. Further\, many grammatical features necessitate inter-sentential context in order to translate\, which makes perfect sentence-level machine translation an impossible task. In this talk\, we will cover the inherent limits of sentence-level machine translation. Following this\, we will explore a key obstacle in the way of true context-aware machine translation—an abject lack of data. Finally\, we will cover recent work that provides (1) a new evaluation dataset that specifically addresses the translation of context-dependent discourse phenomena and (2) reconstructed documents from large-scale sentence-level bitext that can be used to improve performance when translating these types of phenomena.
\n X-TAGS;LANGUAGE=en-US:2024\,March\,Wicks END:VEVENT BEGIN:VEVENT UID:ai1ec-24465@www.clsp.jhu.edu DTSTAMP:20240328T151845Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nLarge Language Models (LLMs) have demonstrated remarkable capabilities across various domains. However\, it is still very challenging to build highly-reliable applications with LLMs that support specialized use cases. LLMs trained on web data often excel at capturing general language patterns\, but they could struggle to support specialized domains and personalized user needs. Moreover\, LLMs can produce errors that are deceptively plausible\, making them potentially dangerous for high-trust scenarios. In this talk\, I will discuss some of our recent efforts in addressing these challenges with data-efficient tuning methods and a novel factuality evaluation framework. Specifically\, my talk will focus on building multilingual applications\, one crucial use case often characterized by limited tuning and evaluation data.\nBio\nXinyi (Cindy) Wang is a research scientist at Google DeepMind working on Large Language Models (LLMs) and their application to generative question-answering. She has worked on multilingual instruction-tuning for Gemini and multilingual generative models used in Google search. Before Google DeepMind\, Cindy Wang obtained her PhD degree in Language Technologies at Carnegie Mellon University. During her PhD\, she mainly worked on developing data-efficient natural language processing (NLP) systems. She has made several contributions in data selection\, data representation\, and model adaptation for multilingual NLP. DTSTART;TZID=America/New_York:20240308T120000 DTEND;TZID=America/New_York:20240308T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Cindy Wang (Google DeepMind) “Building Data-Efficient and Reliable Applications with Large Language Models” URL:https://www.clsp.jhu.edu/events/cindy-wang-google-deepmind-building-data-efficient-and-reliable-applications-with-large-language-models/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract
\nLarge Language Models (LLMs) have demonstrated remark able capabilities across various domains. However\, it is still very chall enging to build highly-reliable applications with LLMs that support specia lized use cases. LLMs trained on web data often excel at capturing general language patterns\, but they could struggle to support specialized domain s and personalized user needs. Moreover\, LLMs can produce errors that are deceptively plausible\, making them potentially dangerous for high-trust scenarios. In this talk\, I will discuss some of our recent efforts in add ressing these challenges with data-efficient tuning methods and a novel fa ctuality evaluation framework. Specifically\, my talk will focus on buildi ng multilingual applications\, one crucial use case often characterized by limited tuning and evaluation data.
\nBio
\nXinyi (Cindy) Wang is a research scientist at Google DeepMind working on Large Language Models (LLMs) and their application to generative question-answering. She has worked on multilingual instruction-tuning for Gemini and multilingual generative models used in Google search. Before Google DeepMind\, Cindy Wang obtained her PhD degree in Language Technologies at Carnegie Mellon University. During her PhD\, she mainly worked on developing data-efficient natural language processing (NLP) systems. She has made several contributions in data selection\, data representation\, and model adaptation for multilingual NLP.
\n X-TAGS;LANGUAGE=en-US:2024\,March\,Wang END:VEVENT BEGIN:VEVENT UID:ai1ec-24479@www.clsp.jhu.edu DTSTAMP:20240328T151845Z CATEGORIES;LANGUAGE=en-US:Student Seminars CONTACT: DESCRIPTION:Abstract\nThe speech field is evolving to solve more challenging scenarios\, such as multi-channel recordings with multiple simultaneous talkers. Given the many types of microphone setups out there\, we present the UniX-Encoder. It is a universal encoder designed for multiple tasks and works with any microphone array\, in both solo and multi-talker environments. Our research enhances previous multichannel speech processing efforts in four key areas: 1) Adaptability: Contrasting traditional models constrained to certain microphone array configurations\, our encoder is universally compatible. 2) Multi-Task Capability: Beyond the single-task focus of previous systems\, UniX-Encoder acts as a robust upstream model\, adeptly extracting features for diverse tasks including ASR and speaker recognition. 3) Self-Supervised Training: The encoder is trained without requiring labeled multi-channel data. 4) End-to-End Integration: In contrast to models that first beamform and then process single channels\, our encoder offers an end-to-end solution\, bypassing explicit beamforming or separation. To validate its effectiveness\, we tested the UniX-Encoder on a synthetic multi-channel dataset from the LibriSpeech corpus. Across tasks like speech recognition and speaker diarization\, our encoder consistently outperformed combinations like the WavLM model with the BeamformIt frontend. DTSTART;TZID=America/New_York:20240311T200500 DTEND;TZID=America/New_York:20240311T210500 SEQUENCE:0 SUMMARY:Zili Huang (JHU) “Unix-Encoder: A Universal X-Channel Speech Encoder for Ad-Hoc Microphone Array Speech Processing” URL:https://www.clsp.jhu.edu/events/zili-huang-jhu-unix-encoder-a-universal-x-channel-speech-encoder-for-ad-hoc-microphone-array-speech-processing/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract
\nThe speech field is evolving to solve more challenging scenarios\, such as multi-channel recordings with multiple simultaneous talkers. Given the many types of microphone setups out there\, we present the UniX-Encoder. It is a universal encoder designed for multiple tasks and works with any microphone array\, in both solo and multi-talker environments. Our research enhances previous multichannel speech processing efforts in four key areas: 1) Adaptability: Contrasting traditional models constrained to certain microphone array configurations\, our encoder is universally compatible. 2) Multi-Task Capability: Beyond the single-task focus of previous systems\, UniX-Encoder acts as a robust upstream model\, adeptly extracting features for diverse tasks including ASR and speaker recognition. 3) Self-Supervised Training: The encoder is trained without requiring labeled multi-channel data. 4) End-to-End Integration: In contrast to models that first beamform and then process single channels\, our encoder offers an end-to-end solution\, bypassing explicit beamforming or separation. To validate its effectiveness\, we tested the UniX-Encoder on a synthetic multi-channel dataset from the LibriSpeech corpus. Across tasks like speech recognition and speaker diarization\, our encoder consistently outperformed combinations like the WavLM model with the BeamformIt frontend.
\n X-TAGS;LANGUAGE=en-US:2024\,Huang\,March END:VEVENT BEGIN:VEVENT UID:ai1ec-24481@www.clsp.jhu.edu DTSTAMP:20240328T151845Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nNatural language provides an intuitive and powerful interface to access knowledge at scale. Modern language systems draw information from two rich knowledge sources: (1) information stored in their parameters during massive pretraining and (2) documents retrieved at inference time. Yet\, we are far from building systems that can reliably provide information from such knowledge sources. In this talk\, I will discuss paths for more robust systems. In the first part of the talk\, I will present a module for scaling retrieval-based knowledge augmentation. We learn a compressor that maps retrieved documents into textual summaries prior to in-context integration. This not only reduces the computational costs but also filters irrelevant or incorrect information. In the second half of the talk\, I will discuss the challenges of updating knowledge stored in model parameters and propose a method to prevent models from reciting outdated information by identifying facts that are prone to rapid change. I will conclude my talk by proposing an interactive system that can elicit information from users when needed.\nBiography\nEunsol Choi is an assistant professor in the Computer Science department at the University of Texas at Austin. Prior to UT\, she spent a year at Google AI as a visiting researcher. Her research area spans natural language processing and machine learning. She is particularly interested in interpreting and reasoning about text in a dynamic real world context. She is a recipient of a Facebook research fellowship\, Google faculty research award\, Sony faculty award\, and an outstanding paper award at EMNLP. She received a Ph.D. in computer science and engineering from the University of Washington and a B.A. in mathematics and computer science from Cornell University. DTSTART;TZID=America/New_York:20240315T120000 DTEND;TZID=America/New_York:20240315T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21209 SEQUENCE:0 SUMMARY:Eunsol Choi (University of Texas at Austin) “Knowledge-Rich Language Systems in a Dynamic World” URL:https://www.clsp.jhu.edu/events/eunsol-choi-university-of-texas-at-austin-knowledge-rich-language-systems-in-a-dynamic-world/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract
\nNatural language provides an intuitive and powerful i nterface to access knowledge at scale. Modern language systems draw inform ation from two rich knowledge sources: (1) information stored in their par ameters during massive pretraining and (2) documents retrieved at inferenc e time. Yet\, we are far from building systems that can reliably provide i nformation from such knowledge sources. In this talk\, I will discuss path s for more robust systems. In the first part of the talk\, I will present a module for scaling retrieval-based knowledge augmentation. We learn a co mpressor that maps retrieved documents into textual summaries prior to in- context integration. This not only reduces the computational costs but als o filters irrelevant or incorrect information. In the second half of the t alk\, I will discuss the challenges of updating knowledge stored in model parameters and propose a method to prevent models from reciting outdated i nformation by identifying facts that are prone to rapid change. I will con clude my talk by proposing an interactive system that can elicit informati on from users when needed.
\nBiography
\nEunsol Choi is an assistant professor in the Computer Science department at the University of Texas at Austin. Prior to UT\, she spent a year at Google AI as a visiting researcher. Her research area spans natural language processing and machine learning. She is particularly interested in interpreting and reasoning about text in a dynamic real world context. She is a recipient of a Facebook research fellowship\, Google faculty research award\, Sony faculty award\, and an outstanding paper award at EMNLP. She received a Ph.D. in computer science and engineering from the University of Washington and a B.A. in mathematics and computer science from Cornell University.
\n\n X-TAGS;LANGUAGE=en-US:2024\,Choi\,March END:VEVENT BEGIN:VEVENT UID:ai1ec-24489@www.clsp.jhu.edu DTSTAMP:20240328T151845Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nOver the past decade\, the field of Speech Generation has seen significant progress in enhancing speech quality and naturalness . Despite these advancements\, persistent challenges such as speech noise\ , limited high-quality data availability\, and the lack of robustness in s peech generation systems remain. Additionally\, the evaluation of speech p resents a significant obstacle for comprehensive assessment at scale. Conc urrently\, recent breakthroughs in Large Language Models (LLMs) have revol utionized text generation and natural language processing. However\, the c omplexity of spoken language introduces unique hurdles\, including managin g long speech waveform sequences. In this presentation\, I will explore re cent innovations in speech synthesis with spoken language modeling\, evalu ation for generative speech systems and high-fidelity speech enhancement. Finally\, I will discuss prospective avenues for future research aimed at addressing these challenges.\nBio\nSoumi Maiti is a postdoctoral researche r at Language Technologies Institute\, Carnegie Mellon University\, where she works on speech and language processing. Her research broadly focuses on building intelligent systems that can communicate with humans naturally . She earned a Ph.D. from the Graduate Center\, City University of New Yo rk (CUNY) with the Graduate Center Fellowship advised by Prof Michael Mand el. She earned her B.Tech. in Computer Science from the Indian Institute o f Engineering Science and Technology\, Shibpur. Previously\, she has worke d in the Text-To-Speech team at Apple. She has also worked at Google and I nteractions LLC as a student researcher and research intern. She has worke d as an adjunct lecturer at Brooklyn College\, CUNY\, for three years and served as a Math Fellow at Hunter College. She has served as session chair in ICASSP 2024\, ICASSP 2023\, SLT 2023 and others\, and area chair at EM NLP 2023.\n DTSTART;TZID=America/New_York:20240329T120000 DTEND;TZID=America/New_York:20240329T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Soumi Maiti (CMU) “Towards Robust Speech Generation” URL:https://www.clsp.jhu.edu/events/soumi-maiti/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n
\\nAbstr act
\nOver the past decade\, the field of Speech Generation has seen significant progress in enhancing speech quality and naturalness . Despite these advancements\, persistent challenges such as speech noise\ , limited high-quality data availability\, and the lack of robustness in s peech generation systems remain. Additionally\, the evaluation of speech p resents a significant obstacle for comprehensive assessment at scale. Conc urrently\, recent breakthroughs in Large Language Models (LLMs) have revol utionized text generation and natural language processing. However\, the c omplexity of spoken language introduces unique hurdles\, including managin g long speech waveform sequences. In this presentation\, I will explore re cent innovations in speech synthesis with spoken language modeling\, evalu ation for generative speech systems and high-fidelity speech enhancement. Finally\, I will discuss prospective avenues for future research aimed at addressing these challenges.
\nBio
\nSoumi Ma iti is a postdoctoral researcher at Language Technologies Institute\, Carn egie Mellon University\, where she works on speech and language processing . Her research broadly focuses on building intelligent systems that can co mmunicate with humans naturally. She earned a Ph.D. from the Graduate Cen ter\, City University of New York (CUNY) with the Graduate Center Fellowsh ip advised by Prof Michael Mandel. She earned her B.Tech. in Computer Scie nce from the Indian Institute of Engineering Science and Technology\, Shib pur. Previously\, she has worked in the Text-To-Speech team at Apple. She has also worked at Google and Interactions LLC as a student researcher and research intern. She has worked as an adjunct lecturer at Brooklyn Colleg e\, CUNY\, for three years and served as a Math Fellow at Hunter College. She has served as session chair in ICASSP 2024\, ICASSP 2023\, SLT 2023 an d others\, and area chair at EMNLP 2023.
\n\n X-TAGS;LANGUAGE=en-US:2024\,Maiti\,March END:VEVENT END:VCALENDAR