BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-20115@www.clsp.jhu.edu
DTSTAMP:20240329T080823Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nData science in small medical datasets usually means doing precision guesswork on unreliable data provided by those with high expectations. The first part of this talk will focus on issues that data scientists and engineers have to address when working with this kind of data (e.g. unreliable labels\, the effect of confounding factors\, necessity of clinical interpretability\, difficulties with fusing more data sets). The second part of the talk will include some real examples of this kind of data science in the field of neurology (prediction of motor deficits in Parkinson’s disease based on acoustic analysis of speech\, diagnosis of Parkinson’s disease dysgraphia utilising online handwriting\, exploring the Mozart effect in epilepsy based on music information retrieval) and psychology (assessment of graphomotor disabilities in children with developmental dysgraphia).\nBiography\nJiri Mekyska is the head of the BDALab (Brain Diseases Analysis Laboratory) at the Brno University of Technology\, where he leads a multidisciplinary team of researchers (signal processing engineers\, data scientists\, neurologists\, psychologists) with a special focus on the development of new digital endpoints and digital biomarkers enabling researchers to better understand\, diagnose and monitor neurodegenerative (e.g. Parkinson’s disease) and neurodevelopmental (e.g. dysgraphia) diseases.
DTSTART;TZID=America/New_York:20210329T120000
DTEND;TZID=America/New_York:20210329T131500
LOCATION:via Zoom
SEQUENCE:0
SUMMARY:Jiri Mekyska (Brno University of Technology) “Data Science in Small Medical Data Sets: From Logistic Regression Towards Logistic Regression”
URL:https://www.clsp.jhu.edu/events/jiri-mekyska-brno-university-of-technology/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:\n\n
\nAbstract
\nData science in small medical datasets usually means doing precision guesswork on unreliable data provided by those with high expectations. The first part of this talk will focus on issues that data scientists and engineers have to address when working with this kind of data (e.g. unreliable labels\, the effect of confounding factors\, necessity of clinical interpretability\, difficulties with fusing more data sets). The second part of the talk will include some real examples of this kind of data science in the field of neurology (prediction of motor deficits in Parkinson’s disease based on acoustic analysis of speech\, diagnosis of Parkinson’s disease dysgraphia utilising online handwriting\, exploring the Mozart effect in epilepsy based on music information retrieval) and psychology (assessment of graphomotor disabilities in children with developmental dysgraphia).
\nBiography
\nJiri Mekyska is the head of the BDALab (Brain Diseases Analysis Laboratory) at the Brno University of Technology\, where he leads a multidisciplinary team of researchers (signal processing engineers\, data scientists\, neurologists\, psychologists) with a special focus on the development of new digital endpoints and digital biomarkers enabling researchers to better understand\, diagnose and monitor neurodegenerative (e.g. Parkinson’s disease) and neurodevelopmental (e.g. dysgraphia) diseases.
\nAbstract
\nNeural sequence generation systems oftentimes generate sequences by searching for the most likely sequence under the learnt probability distribution. This assumes that the most likely sequence\, i.e. the mode\, under such a model must also be the best sequence it has to offer (often in a given context\, e.g. conditioned on a source sentence in translation). Recent findings in neural machine translation (NMT) show that the true most likely sequence oftentimes is empty under many state-of-the-art NMT models. This follows a large list of other pathologies and biases observed in NMT and other sequence generation models: a length bias\, larger beams degrading performance\, exposure bias\, and many more. Many of these works blame the probabilistic formulation of NMT or maximum likelihood estimation. We provide a different view on this: it is mode-seeking search\, e.g. beam search\, that introduces many of these pathologies and biases\, and such a decision rule is not suitable for the type of distributions learnt by NMT systems. We show that NMT models spread probability mass over many translations\, and that the most likely translation oftentimes is a rare event. We further show that translation distributions do capture important aspects of translation well in expectation. Therefore\, we advocate for decision rules that take into account the entire probability distribution and not just its mode. We provide one example of such a decision rule\, and show that this is a fruitful research direction.
\nBiography
\nI am an assistant professor (UD) in natural language processing at the Institute for Logic\, Language and Computation where I lead the Probabilistic Language Learning group.
\nMy work concerns the design of models and algorithms that learn to represent\, understand\, and generate language data. Examples of specific problems I am interested in include language modelling\, machine translation\, syntactic parsing\, textual entailment\, text classification\, and question answering.
\nI also develop techniques to approach general machine learning problems such as probabilistic inference\, gradient and density estimation.
\nMy interests sit at the intersection of disciplines such as statistics\, machine learning\, approximate inference\, global optimization\, formal languages\, and computational linguistics.
X-TAGS;LANGUAGE=en-US:2021\,April\,Aziz
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-20120@www.clsp.jhu.edu
DTSTAMP:20240329T080823Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nRobotics@Google’s mission is to make robots useful in the real world through machine learning. We are excited about a new model for robotics\, designed for generalization across diverse environments and instructions. This model is focused on scalable data-driven learning\, which is task-agnostic\, leverages simulation\, learns from past experience\, and can be quickly adapted to work in the real world through limited interactions. In this talk\, we’ll share some of our recent work in this direction in both manipulation and locomotion applications.\nBiography\nCarolina Parada is a Senior Engineering Manager at Google Robotics. She leads the robot-mobility group\, which focuses on improving robot motion planning\, navigation\, and locomotion\, using reinforcement learning. Prior to that\, she led the camera perception team for self-driving cars at Nvidia for 2 years. She was also a lead with Speech @ Google for 7 years\, where she drove multiple research and engineering efforts that enabled Ok Google\, the Google Assistant\, and Voice-Search. Carolina grew up in Venezuela and moved to the US to pursue a B.S. and M.S. degree in Electrical Engineering at University of Washington and her PhD at Johns Hopkins University at the Center for Language and Speech Processing (CLSP).
DTSTART;TZID=America/New_York:20210423T120000
DTEND;TZID=America/New_York:20210423T131500
LOCATION:via Zoom
SEQUENCE:0
SUMMARY:Carolina Parada (Google AI) “State of Robotics @ Google”
URL:https://www.clsp.jhu.edu/events/carolina-parada-google-ai/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:\n\n\n
Abstract
\nRobotics@Google’s mission is to make robots useful in the real world through machine learning. We are excited about a new model for robotics\, designed for generalization across diverse environments and instructions. This model is focused on scalable data-driven learning\, which is task-agnostic\, leverages simulation\, learns from past experience\, and can be quickly adapted to work in the real world through limited interactions. In this talk\, we’ll share some of our recent work in this direction in both manipulation and locomotion applications.
\nBiography
\nCarolina Parada is a Senior Engineering Manager at Google Robotics. She leads the robot-mobility group\, which focuses on improving robot motion planning\, navigation\, and locomotion\, using reinforcement learning. Prior to that\, she led the camera perception team for self-driving cars at Nvidia for 2 years. She was also a lead with Speech @ Google for 7 years\, where she drove multiple research and engineering efforts that enabled Ok Google\, the Google Assistant\, and Voice-Search. Carolina grew up in Venezuela and moved to the US to pursue a B.S. and M.S. degree in Electrical Engineering at University of Washington and her PhD at Johns Hopkins University at the Center for Language and Speech Processing (CLSP).
X-TAGS;LANGUAGE=en-US:2021\,April\,Parada
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-21494@www.clsp.jhu.edu
DTSTAMP:20240329T080823Z
CATEGORIES;LANGUAGE=en-US:Student Seminars
CONTACT:
DESCRIPTION:Abstract\nAdversarial attacks deceive neural network systems by adding carefully crafted perturbations to benign signals. Being almost imperceptible to humans\, these attacks pose a severe security threat to the state-of-the-art speech and speaker recognition systems\, making it vital to propose countermeasures against them. In this talk\, we focus on 1) classification of a given adversarial attack into attack algorithm type\, threat model type\, and signal-to-adversarial-noise ratios\, and 2) developing a novel speech denoising solution to further improve the classification performance.\nOur proposed approach uses an x-vector network as a signature extractor to get embeddings\, which we call signatures. These signatures contain information about the attack and can help classify different attack algorithms\, threat models\, and signal-to-adversarial-noise ratios. We demonstrate the transferability of such signatures to other tasks. In particular\, a signature extractor trained to classify attacks against speaker identification can also be used to classify attacks against speaker verification and speech recognition. We also show that signatures can be used to detect unknown attacks\, i.e. attacks not included during training. Lastly\, we propose to improve the signature extractor by making its job easier: removing the clean signal from the adversarial example (which consists of clean signal + perturbation). We train our signature extractor using adversarial perturbation. At inference time\, we use a time-domain denoiser to obtain adversarial perturbation from adversarial examples. Using our improved approach\, we show that common attacks in the literature (Fast Gradient Sign Method (FGSM)\, Projected Gradient Descent (PGD)\, Carlini-Wagner (CW)) can be classified with accuracy as high as 96%. We also detect unknown attacks with an equal error rate (EER) of about 9%\, which is very promising.
DTSTART;TZID=America/New_York:20220304T120000
DTEND;TZID=America/New_York:20220304T131500
LOCATION:Ames Hall 234 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Student Seminar – Sonal Joshi “Classify and Detect Adversarial Attacks Against Speaker and Speech Recognition Systems”
URL:https://www.clsp.jhu.edu/events/student-seminar-sonal-joshi/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:\n\n\nAbstract
\nAdversarial attacks deceive neural network systems by adding carefully crafted perturbations to benign signals. Being almost imperceptible to humans\, these attacks pose a severe security threat to the state-of-the-art speech and speaker recognition systems\, making it vital to propose countermeasures against them. In this talk\, we focus on 1) classification of a given adversarial attack into attack algorithm type\, threat model type\, and signal-to-adversarial-noise ratios\, and 2) developing a novel speech denoising solution to further improve the classification performance.
\nOur proposed approach uses an x-vector network as a signature extractor to get embeddings\, which we call signatures. These signatures contain information about the attack and can help classify different attack algorithms\, threat models\, and signal-to-adversarial-noise ratios. We demonstrate the transferability of such signatures to other tasks. In particular\, a signature extractor trained to classify attacks against speaker identification can also be used to classify attacks against speaker verification and speech recognition. We also show that signatures can be used to detect unknown attacks\, i.e. attacks not included during training. Lastly\, we propose to improve the signature extractor by making its job easier: removing the clean signal from the adversarial example (which consists of clean signal + perturbation). We train our signature extractor using adversarial perturbation. At inference time\, we use a time-domain denoiser to obtain adversarial perturbation from adversarial examples. Using our improved approach\, we show that common attacks in the literature (Fast Gradient Sign Method (FGSM)\, Projected Gradient Descent (PGD)\, Carlini-Wagner (CW)) can be classified with accuracy as high as 96%. We also detect unknown attacks with an equal error rate (EER) of about 9%\, which is very promising.
X-TAGS;LANGUAGE=en-US:2022\,Joshi\,March
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-21615@www.clsp.jhu.edu
DTSTAMP:20240329T080823Z
CATEGORIES;LANGUAGE=en-US:Student Seminars
CONTACT:
DESCRIPTION:Abstract\nWe consider a problem of data collection for semantically rich NLU tasks\, where detailed semantics of documents (or utterances) are captured using a complex meaning representation. Previously\, data collection for such tasks was either handled at the cost of extensive annotator training (e.g. in FrameNet or PropBank) or simplified meaning representation (e.g. in QA-SRL or Overnight). In this talk\, we present two systems [1\, 2] that aim to support fast\, accurate\, and expressive semantic annotations by pairing human workers with a trained model in the loop.\nThe first system\, called Guided K-best [1]\, is an annotation toolkit for conversational semantic parsing. Instead of typing annotations from scratch\, data specialists choose a correct parse from the K-best output of a few-shot prototyped model. As the K-best list can be large (e.g. K=100)\, we guide the annotators’ exploration of the K-best list via explainable hierarchical clustering. In addition\, we experiment with RoBERTa-based reranking of the K-best list to recalibrate the few-shot model towards Accuracy@K. The final system allows annotating data up to 35% faster than the standard\, non-guided K-best and improves the few-shot model’s top-1 accuracy by up to 18%. The second system\, called SchemaBlocks [2]\, is an annotation toolkit for schemas\, or structured descriptions of frequent real-world scenarios (e.g.\, cooking a meal). It represents schemas in the annotation UI as nested blocks. Using a novel Causal ARM model\, we further speed up the annotation process and guide data specialists towards expressive and diverse schemas. As part of this work\, we collect 232 schemas\, evaluating their internal coherence and their coverage on large-scale newswire corpora.
DTSTART;TZID=America/New_York:20220311T120000
DTEND;TZID=America/New_York:20220311T131500
LOCATION:Virtual Seminar
SEQUENCE:0
SUMMARY:Student Seminar – Anton Belyy “Systems for Human-AI Cooperation on Collecting Semantic Annotations”
URL:https://www.clsp.jhu.edu/events/student-seminar-anton-belyy-systems-for-human-ai-cooperation-on-collecting-semantic-annotations/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:\n\n\nAbstract
\n
X-TAGS;LANGUAGE=en-US:2022\,Belyy\,March
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-21621@www.clsp.jhu.edu
DTSTAMP:20240329T080823Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nSystems that support expressive\, situated natural language interactions are essential for expanding access to complex computing systems\, such as robots and databases\, to non-experts. Reasoning and learning in such natural language interactions is a challenging open problem. For example\, resolving sentence meaning requires reasoning not only about word meaning\, but also about the interaction context\, including the history of the interaction and the situated environment. In addition\, the sequential dynamics that arise between user and system in and across interactions make learning from static data\, i.e.\, supervised data\, both challenging and ineffective. However\, these same interaction dynamics result in ample opportunities for learning from implicit and explicit feedback that arises naturally in the interaction. This lays the foundation for systems that continually learn\, improve\, and adapt their language use through interaction\, without additional annotation effort. In this talk\, I will focus on these challenges and opportunities. First\, I will describe our work on modeling dependencies between language meaning and interaction context when mapping natural language in interaction to executable code. In the second part of the talk\, I will describe our work on language understanding and generation in collaborative interactions\, focusing on continual learning from explicit and implicit user feedback.\nBiography\nAlane Suhr is a PhD Candidate in the Department of Computer Science at Cornell University\, advised by Yoav Artzi.
Her research spans natural language processing\, machine learning\, and computer vision\, with a focus on building systems that participate and continually learn in situated natural language interactions with human users. Alane’s work has been recognized by paper awards at ACL and NAACL\, and has been supported by fellowships and grants\, including an NSF Graduate Research Fellowship\, a Facebook PhD Fellowship\, and research awards from AI2\, ParlAI\, and AWS. Alane has also co-organized multiple workshops and tutorials appearing at NeurIPS\, EMNLP\, NAACL\, and ACL. Previously\, Alane received a BS in Computer Science and Engineering as an Eminence Fellow at the Ohio State University.
DTSTART;TZID=America/New_York:20220314T120000
DTEND;TZID=America/New_York:20220314T131500
LOCATION:Virtual Seminar
SEQUENCE:0
SUMMARY:Alane Suhr (Cornell University) “Reasoning and Learning in Interactive Natural Language Systems”
URL:https://www.clsp.jhu.edu/events/alane-suhr-cornell-university-reasoning-and-learning-in-interactive-natural-language-systems/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:\n\n\nAbstract
\nSystems that support expressive\, situated natural language interactions are essential for expanding access to complex computing systems\, such as robots and databases\, to non-experts. Reasoning and learning in such natural language interactions is a challenging open problem. For example\, resolving sentence meaning requires reasoning not only about word meaning\, but also about the interaction context\, including the history of the interaction and the situated environment. In addition\, the sequential dynamics that arise between user and system in and across interactions make learning from static data\, i.e.\, supervised data\, both challenging and ineffective. However\, these same interaction dynamics result in ample opportunities for learning from implicit and explicit feedback that arises naturally in the interaction. This lays the foundation for systems that continually learn\, improve\, and adapt their language use through interaction\, without additional annotation effort. In this talk\, I will focus on these challenges and opportunities. First\, I will describe our work on modeling dependencies between language meaning and interaction context when mapping natural language in interaction to executable code. In the second part of the talk\, I will describe our work on language understanding and generation in collaborative interactions\, focusing on continual learning from explicit and implicit user feedback.
\nBiography
\nAlane Suhr is a PhD Candidate in the Department of Computer Science at Cornell University\, advised by Yoav Artzi. Her research spans natural language processing\, machine learning\, and computer vision\, with a focus on building systems that participate and continually learn in situated natural language interactions with human users. Alane’s work has been recognized by paper awards at ACL and NAACL\, and has been supported by fellowships and grants\, including an NSF Graduate Research Fellowship\, a Facebook PhD Fellowship\, and research awards from AI2\, ParlAI\, and AWS. Alane has also co-organized multiple workshops and tutorials appearing at NeurIPS\, EMNLP\, NAACL\, and ACL. Previously\, Alane received a BS in Computer Science and Engineering as an Eminence Fellow at the Ohio State University.
X-TAGS;LANGUAGE=en-US:2022\,March\,Suhr
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-21616@www.clsp.jhu.edu
DTSTAMP:20240329T080823Z
CATEGORIES;LANGUAGE=en-US:Student Seminars
CONTACT:
DESCRIPTION:Abstract\nSocial media allows researchers to track societal and cultural changes over time based on language analysis tools. Many of these tools rely on statistical algorithms which need to be tuned to specific types of language. Recent studies have shown that the absence of appropriate tuning\, specifically in the presence of semantic shift\, can hinder robustness of the underlying methods. However\, little is known about the practical effect this sensitivity may have on downstream longitudinal analyses. We explore this gap in the literature through a timely case study: understanding shifts in depression during the course of the COVID-19 pandemic. We find that inclusion of only a small number of semantically-unstable features can promote significant changes in longitudinal estimates of our target outcome. At the same time\, we demonstrate that a recently-introduced method for measuring semantic shift may be used to proactively identify failure points of language-based models and\, in turn\, improve predictive generalization.
DTSTART;TZID=America/New_York:20220318T120000
DTEND;TZID=America/New_York:20220318T131500
LOCATION:Ames Hall 234 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Student Seminar – Keith Harrigian “The Problem of Semantic Shift in Longitudinal Monitoring of Social Media”
URL:https://www.clsp.jhu.edu/events/student-seminar-keith-harrigian-the-problem-of-semantic-shift-in-longitudinal-monitoring-of-social-media/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:\n\n\nAbstract
\nSocial media allows researchers to track societal and cultural changes over time based on language analysis tools. Many of these tools rely on statistical algorithms which need to be tuned to specific types of language. Recent studies have shown that the absence of appropriate tuning\, specifically in the presence of semantic shift\, can hinder robustness of the underlying methods. However\, little is known about the practical effect this sensitivity may have on downstream longitudinal analyses. We explore this gap in the literature through a timely case study: understanding shifts in depression during the course of the COVID-19 pandemic. We find that inclusion of only a small number of semantically-unstable features can promote significant changes in longitudinal estimates of our target outcome. At the same time\, we demonstrate that a recently-introduced method for measuring semantic shift may be used to proactively identify failure points of language-based models and\, in turn\, improve predictive generalization.
X-TAGS;LANGUAGE=en-US:2022\,Harrigian\,March
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-21497@www.clsp.jhu.edu
DTSTAMP:20240329T080823Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nWhile the “deep learning tsunami” continues to define the state of the art in speech and language processing\, finite-state transducer grammars developed by linguists and engineers are still widely used in industrial\, highly-multilingual settings\, particularly for symbolic\, “front-end” speech applications. In this talk\, I will first briefly review the current state of the OpenFst and OpenGrm finite-state transducer libraries. I then review two “late-breaking” algorithms found in these libraries. The first is a heuristic but highly-effective general-purpose optimization routine for weighted transducers. The second is an algorithm for computing the single shortest string of non-deterministic weighted acceptors which lack certain properties required by classic shortest-path algorithms. I will then illustrate how the OpenGrm tools can be used to induce a finite-state string-to-string transduction model known as a pair n-gram model. This model has been applied to grapheme-to-phoneme conversion\, loanword detection\, abbreviation expansion\, and back-transliteration\, among other tasks.\nBiography\nKyle Gorman is an assistant professor of linguistics at the Graduate Center\, City University of New York\, and director of the master’s program in computational linguistics\; he is also a software engineer in the speech and language algorithms group at Google. With Richard Sproat\, he is the coauthor of Finite-State Text Processing (Morgan & Claypool\, 2021) and the creator of Pynini\, a finite-state text processing library for Python. He has also published on statistical methods for comparing computational models\, text normalization\, grapheme-to-phoneme conversion\, and morphological analysis\, as well as many topics in linguistic theory.
DTSTART;TZID=America/New_York:20220401T120000
DTEND;TZID=America/New_York:20220401T131500
LOCATION:Ames Hall 234 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Kyle Gorman (City University of New York) “Weighted Finite-State Transducers: The Later Years”
URL:https://www.clsp.jhu.edu/events/kyle-gorman-city-university-of-new-york-weighted-finite-state-transducers-the-later-years/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:\n\n\nAbstract
\nWhile the “deep learning tsunami” continues to define the state of the art in speech and language processing\, finite-state transducer grammars developed by linguists and engineers are still widely used in industrial\, highly-multilingual settings\, particularly for symbolic\, “front-end” speech applications. In this talk\, I will first briefly review the current state of the OpenFst and OpenGrm finite-state transducer libraries. I then review two “late-breaking” algorithms found in these libraries. The first is a heuristic but highly-effective general-purpose optimization routine for weighted transducers. The second is an algorithm for computing the single shortest string of non-deterministic weighted acceptors which lack certain properties required by classic shortest-path algorithms. I will then illustrate how the OpenGrm tools can be used to induce a finite-state string-to-string transduction model known as a pair n-gram model. This model has been applied to grapheme-to-phoneme conversion\, loanword detection\, abbreviation expansion\, and back-transliteration\, among other tasks.
\nBiography
\nKyle Gorman is an assistant professor of linguistics at the Graduate Center\, City University of New York\, and director of the master’s program in computational linguistics\; he is also a software engineer in the speech and language algorithms group at Google. With Richard Sproat\, he is the coauthor of Finite-State Text Processing (Morgan & Claypool\, 2021) and the creator of Pynini\, a finite-state text processing library for Python. He has also published on statistical methods for comparing computational models\, text normalization\, grapheme-to-phoneme conversion\, and morphological analysis\, as well as many topics in linguistic theory.
X-TAGS;LANGUAGE=en-US:2022\,Gorman\,March
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-23320@www.clsp.jhu.edu
DTSTAMP:20240329T080823Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nSpeech communications represents a core domain for education\, team problem solving\, social engagement\, and business interactions. The ability for Speech Technology to extract layers of knowledge and assess engagement content represents the next generation of advanced speech solutions. Today\, the emergence of BIG DATA and Machine Learning\, as well as voice-enabled speech systems\, has created the need for effective voice capture and automatic speech/speaker recognition. The ability to employ speech and language technology to assess human-to-human interactions offers new research paradigms having profound impact on assessing human interaction. In this talk\, we will focus on big data naturalistic audio processing relating to (i) child learning spaces\, and (ii) the NASA APOLLO lunar missions. ML based technology advancements include automatic audio diarization\, speech recognition\, and speaker recognition. Child-Teacher based assessment of conversational interactions is explored\, including keyword and “WH-word” (e.g.\, who\, what\, etc.). Diarization processing solutions are applied to both classroom/learning space child speech\, as well as massive APOLLO data. CRSS-UTDallas is expanding our original Apollo-11 corpus\, resulting in a massive multi-track audio processing challenge to make available 150\,000 hrs of Apollo mission data to be shared with science communities: (i) speech/language technology\, (ii) STEM/science and team-based researchers\, and (iii) education/historical/archiving specialists. Our goals here are to provide resources which allow us to better understand how people work/learn collaboratively together. For Apollo\, to accomplish one of mankind’s greatest scientific/technological challenges in the last century.\nBiography\nJohn H.L. Hansen received Ph.D. & M.S. degrees from Georgia Institute of Technology\, and B.S.E.E. from Rutgers Univ. He joined Univ. of Texas at Dallas (UTDallas) in 2005\, where he currently serves as Associate Dean for Research\, Prof. of ECE\, Distinguished Univ. Chair in Telecom. Engineering\, and directs Center for Robust Speech Systems (CRSS). He is an ISCA Fellow\, IEEE Fellow\, and has served as Member and TC-Chair of IEEE Signal Proc. Society\, Speech & Language Proc. Tech. Comm. (SLTC)\, and Technical Advisor to U.S. Delegate for NATO (IST/TG-01). He served as ISCA President (2017-21)\, continues to serve on ISCA Board (2015-23) as Treasurer\, has supervised 99 PhD/MS thesis candidates (EE\, CE\, BME\, TE\, CS\, Ling.\, Cog.Sci.\, Spch.Sci.\, Hear.Sci.)\, and was recipient of the 2020 UT-Dallas Provost’s Award for Grad. PhD Research Mentoring. He is author/co-author of 865 journal/conference papers\, including 14 textbooks\, in the field of speech/language/hearing processing & technology\, including as coauthor of the textbook Discrete-Time Processing of Speech Signals (IEEE Press\, 2000) and lead author of the report “The Impact of Speech Under ‘Stress’ on Military Speech Technology” (NATO RTO-TR-10\, 2000). He served as Organizer\, Chair/Co-Chair/Tech. Chair for ISCA INTERSPEECH-2022\, IEEE ICASSP-2010\, IEEE SLT-2014\, ISCA INTERSPEECH-2002\, and Tech. Chair for IEEE ICASSP-2024. He received the 2022 IEEE Signal Processing Society Leo Beranek Meritorious Service Award.
DTSTART;TZID=America/New_York:20230303T120000
DTEND;TZID=America/New_York:20230303T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:John Hansen (University of Texas at Dallas) “Challenges and Advancements in Speaker Diarization & Recognition for Naturalistic Data Streams”
URL:https://www.clsp.jhu.edu/events/john-hansen-university-of-texas-at-dallas/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:\n\n\nAbstract
\nSpeech communication represents a core domain for education\, team problem solving\, social engagement\, and business interactions. The ability of speech technology to extract layers of knowledge and assess engagement content represents the next generation of advanced speech solutions. Today\, the emergence of big data\, machine learning\, and voice-enabled speech systems has driven the need for effective voice capture and automatic speech/speaker recognition. The ability to employ speech and language technology to assess human-to-human interactions offers new research paradigms with profound impact on assessing human interaction. In this talk\, we will focus on big-data naturalistic audio processing relating to (i) child learning spaces and (ii) the NASA Apollo lunar missions. ML-based technology advancements include automatic audio diarization\, speech recognition\, and speaker recognition. Child-teacher assessment of conversational interactions is explored\, including keyword and “WH-word” (e.g.\, who\, what) analysis. Diarization processing solutions are applied both to classroom/learning-space child speech and to the massive Apollo data. CRSS-UTDallas is expanding our original Apollo-11 corpus\, resulting in a massive multi-track audio processing challenge to make available 150\,000 hours of Apollo mission data to be shared with science communities: (i) speech/language technology\, (ii) STEM/science and team-based researchers\, and (iii) education/historical/archiving specialists. Our goal is to provide resources that allow us to better understand how people work and learn collaboratively. For Apollo\, this meant accomplishing one of mankind’s greatest scientific/technological challenges of the last century.
\nBiography
\nJohn H.L. Hansen received Ph.D. and M.S. degrees from the Georgia Institute of Technology and a B.S.E.E. from Rutgers Univ. He joined the Univ. of Texas at Dallas (UTDallas) in 2005\, where he currently serves as Associate Dean for Research\, Prof. of ECE\, Distinguished Univ. Chair in Telecom. Engineering\, and directs the Center for Robust Speech Systems (CRSS). He is an ISCA Fellow and IEEE Fellow\, and has served as Member and TC-Chair of the IEEE Signal Proc. Society Speech & Language Proc. Tech. Comm. (SLTC) and Technical Advisor to the U.S. Delegate for NATO (IST/TG-01). He served as ISCA President (2017-21)\, continues to serve on the ISCA Board (2015-23) as Treasurer\, has supervised 99 PhD/MS thesis candidates (EE\, CE\, BME\, TE\, CS\, Ling.\, Cog. Sci.\, Spch. Sci.\, Hear. Sci.)\, and was the recipient of the 2020 UT-Dallas Provost’s Award for Grad. PhD Research Mentoring. He is author/co-author of 865 journal/conference papers\, including 14 textbooks in the field of speech/language/hearing processing & technology\; co-author of the textbook Discrete-Time Processing of Speech Signals (IEEE Press\, 2000)\; and lead author of the report “The Impact of Speech Under ‘Stress’ on Military Speech Technology” (NATO RTO-TR-10\, 2000). He served as Organizer and Chair/Co-Chair/Tech. Chair for ISCA INTERSPEECH-2022\, IEEE ICASSP-2010\, IEEE SLT-2014\, and ISCA INTERSPEECH-2002\, and Tech. Chair for IEEE ICASSP-2024. He received the 2022 IEEE Signal Processing Society Leo Beranek Meritorious Service Award.
\n\n X-TAGS;LANGUAGE=en-US:2023\,Hansen\,March END:VEVENT BEGIN:VEVENT UID:ai1ec-23439@www.clsp.jhu.edu DTSTAMP:20240329T080823Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nAs data-based technologies proliferate\, it is increasingly important for researchers to be aware of their work’s wider impact. Concerns like navigating the IRB and figuring out copyright and licensing issues are still key\, but the current shift in focus to matters like inclusivity\, fairness\, and transparency\, and their impact on the research/development life cycle\, has added complexity to the research task. In this talk\, we will take a broad look at the various ways ethics intersects with natural language processing\, machine learning\, and artificial intelligence research and discuss strategies and resources for managing these concerns within the broader research framework.\nBiography\nDenise is responsible for the overall operation of LDC’s External Relations group\, which includes intellectual property management\, licensing\, regulatory matters\, publications\, membership and communications. Before joining LDC\, she practiced law for over 20 years in the areas of international trade\, intellectual property and commercial litigation. She has an A.B. in Political Science from Bryn Mawr College and a Juris Doctor degree from the University of Miami School of Law. DTSTART;TZID=America/New_York:20230310T120000 DTEND;TZID=America/New_York:20230310T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street SEQUENCE:0 SUMMARY:Denise DiPersio (Linguistic Data Consortium\, University of Pennsylvania) “Data and Ethics: Where Does the Twain Meet?” URL:https://www.clsp.jhu.edu/events/denise-dipersio-linguistic-data-consortium-university-of-pennsylvania-data-and-ethics-where-does-the-twain-meet/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
Abstract
\nAs data-based technologies proliferate\, it is increasingly important for researchers to be aware of their work’s wider impact. Concerns like navigating the IRB and figuring out copyright and licensing issues are still key\, but the current shift in focus to matters like inclusivity\, fairness\, and transparency\, and their impact on the research/development life cycle\, has added complexity to the research task. In this talk\, we will take a broad look at the various ways ethics intersects with natural language processing\, machine learning\, and artificial intelligence research and discuss strategies and resources for managing these concerns within the broader research framework.
\nBiography
\nDenise is responsible for the overall operation of LDC’s External Relations group which includes intellectual property management\, licensi ng\, regulatory matters\, publications\, membership and communications. Be fore joining LDC\, she practiced law for over 20 years in the areas of int ernational trade\, intellectual property and commercial litigation. She ha s an A.B. in Political Science from Bryn Mawr College and a Juris Doctor d egree from the University of Miami School of Law.
\n X-TAGS;LANGUAGE=en-US:2023\,DiPersio\,March END:VEVENT BEGIN:VEVENT UID:ai1ec-23505@www.clsp.jhu.edu DTSTAMP:20240329T080823Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nRecent advances in large pretrained language models have unlocked exciting new applications of Natural Language Generation for creative tasks\, such as lyrics or humour generation. In this talk we will present recent work by our team at Alexa AI and discuss current challenges: (1) Pun understanding and generation: We release new datasets for pun understanding and the novel task of context-situated pun generation\, and demonstrate the value of our annotations for pun classification and generation tasks. (2) Song lyric generation: We design a hierarchical lyric generation framework that enables us to generate pleasantly singable lyrics without training on melody-lyric aligned data\, and show that our approach is competitive with strong baselines supervised on parallel data. (3) Create with Alexa: a multimodal story creation experience recently launched on Alexa devices\, which leverages story text generation models in tandem with story visualization and background music generation models to produce multimodal stories for kids.\nBiography\nAlessandra Cervone is an Applied Scientist in the Natural Understanding team at Amazon Alexa AI. Alessandra holds an MSc in Speech and Language Processing from the University of Edinburgh and a PhD in CS from the University of Trento (Italy). During her PhD\, Alessandra worked on computational models of coherence in open-domain dialogue\, advised by Giuseppe Riccardi. In the first year of the PhD\, she was the team leader of one of the teams selected to compete in the first edition of the Alexa Prize. More recently\, her research interests have focused on natural language generation and its evaluation\, in particular in the context of creative AI applications.
DTSTART;TZID=America/New_York:20230317T120000 DTEND;TZID=America/New_York:20230317T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Alessandra Cervone (Amazon) “Controllable Text Generation for Creative Applications” URL:https://www.clsp.jhu.edu/events/alexxandra-cervone-amazon/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract
\nRecent advances in large pretrained language models have unlocked exciting new applications of Natural Language Generation for creative tasks\, such as lyrics or humour generation. In this talk we will present recent work by our team at Alexa AI and discuss current challenges: (1) Pun understanding and generation: We release new datasets for pun understanding and the novel task of context-situated pun generation\, and demonstrate the value of our annotations for pun classification and generation tasks. (2) Song lyric generation: We design a hierarchical lyric generation framework that enables us to generate pleasantly singable lyrics without training on melody-lyric aligned data\, and show that our approach is competitive with strong baselines supervised on parallel data. (3) Create with Alexa: a multimodal story creation experience recently launched on Alexa devices\, which leverages story text generation models in tandem with story visualization and background music generation models to produce multimodal stories for kids.
\nBiography
\nAlessandra Cervone is an Applied Scientist in the Natural Understanding team at Amazon Alexa AI. Alessandra holds an MSc in Speech and Language Processing from University of Edinburgh and a PhD in CS from University of Trento (Italy). During her PhD\, Alessandra worked on comput ational models of coherence in open-domain dialogue advised by Giuseppe Ri ccardi. In the first year of the PhD\, she was the team leader of one of t he teams selected to compete in the first edition of the Alexa Prize. More recently\, her research interests have been focused on natural language g eneration and its evaluation\, in particular in the context of creative AI applications.
\n \\nAbstract
\nDespite many recent advances in automatic speech reco gnition (ASR)\, linguists and language communities engaged in language doc umentation projects continue to face the obstacle of the “transcription bo ttleneck”. Researchers in NLP typically do not distinguish between widely spoken languages that currently happen to have few training resources and endangered languages that will never have abundant data. As a result\, we often fail to thoroughly explore when ASR is helpful for language document ation\, what architectures work best for the sorts of languages that are i n need of documentation\, and how data can be collected and organized to p roduce optimal results. In this talk I describe several projects that atte mpt to bridge the gap between the promise of ASR for language documentatio n and the reality of using this technology in real-world settings.
\nBiography
\nAbstract
\nHow important are different temporal speech modulations for speech recognition? We answer this question from two complementary perspectives. First\, we quantify the amount of phonetic information in the modulation spectrum of speech by computing the mutual information between temporal modulations and frame-wise phoneme labels. Looking from another perspective\, we ask which speech modulations an Automatic Speech Recognition (ASR) system prefers for its operation. Data-driven weights are learned over the modulation spectrum and optimized for an end-to-end ASR task. Both methods unanimously agree that speech information is mostly contained in slow modulations. Maximum mutual information occurs around 3-6 Hz\, which also happens to be the range of modulations most preferred by the ASR. In addition\, we show that incorporating this knowledge into ASRs significantly reduces their dependency on the amount of training data.
\n\nLearning How to Play With The Machines: Taking Stock of Where the Collaboration Between Computational and Social Science Stands
\n\n
Speakers: Jeff Gill\, Ernesto Calvo\, Hale Sirin and Antonios Anastasopoulos
\n X-TAGS;LANGUAGE=en-US:2023\,April\,APSA Roundtable END:VEVENT BEGIN:VEVENT UID:ai1ec-23586@www.clsp.jhu.edu DTSTAMP:20240329T080823Z CATEGORIES;LANGUAGE=en-US:Student Seminars CONTACT: DESCRIPTION: DTSTART;TZID=America/New_York:20230410T120000 DTEND;TZID=America/New_York:20230410T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Student Seminar – Ruizhe Huang URL:https://www.clsp.jhu.edu/events/student-seminar-ruizhe-huang/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2023\,April\,Huang END:VEVENT BEGIN:VEVENT UID:ai1ec-23588@www.clsp.jhu.edu DTSTAMP:20240329T080823Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nAdvances in open domain Large Language Models (LLMs) starting with BERT and more recently with GPT-4\, PaLM\, and LLaMA have fa cilitated dramatic improvements in conversational systems. These improveme nts include an unprecedented breadth of conversational interactions betwee n humans and machines while maintaining and sometimes surpassing the accur acy of systems trained specifically for known\, closed domains. However\, many applications still require higher levels of accuracy than pre-trained LLMs can provide. There are many studies underway to accomplish this. Bro adly speaking\, the methods assume the pre-trained models are fixed (due t o cost/time)\, and instead look to various augmentation methods including prompting strategies and model adaptation/fine-tuning.\nOne augmentation s trategy leverages the context of the conversation. For example\, who are t he participants and what is known about these individuals (personal contex t)\, what was just said (dialogue context)\, where is the conversation tak ing place (geo context)\, what time of day and season is it (time context) \, etc. A powerful form of context is the shared visual setting of the co nversation between the human(s) and machine. 
The shared visual scene may be from a device (phone\, smart glasses) or represented on a screen (browser\, maps\, etc.). The elements in the visual context can be exploited by grounding the natural language conversational interaction\, thereby changing the priors of certain concepts and increasing the accuracy of the system. In this talk\, I will present some of my historical work in this area as well as my recent work in the AI Virtual Assistant (AVA) Lab at Georgia Tech.\nBio\nDr. Larry Heck is a Professor with a joint appointment in the School of Electrical and Computer Engineering and the School of Interactive Computing at the Georgia Institute of Technology. He holds the Rhesa S. Farmer Distinguished Chair of Advanced Computing Concepts and is a Georgia Research Alliance Eminent Scholar. He received the BSEE from Texas Tech University (1986)\, and MSEE and PhD EE degrees from the Georgia Institute of Technology (1989\, 1991). He is a Fellow of the IEEE\, was inducted into the Academy of Distinguished Engineering Alumni at Georgia Tech\, and received the Distinguished Engineer Award from the Texas Tech University Whitacre College of Engineering. He was a Senior Research Engineer with SRI (1992-98)\, Vice President of R&D at Nuance (1998-2005)\, Vice President of Search and Advertising Sciences at Yahoo! (2005-2009)\, Chief Scientist of the Microsoft Speech products and Distinguished Engineer in Microsoft Research (2009-2014)\, Principal Scientist with Google Research (2014-2017)\, and CEO of Viv Labs and SVP at Samsung (2017-2021).\n\n DTSTART;TZID=America/New_York:20230414T120000 DTEND;TZID=America/New_York:20230414T131500 LOCATION:Hackerman Hall B17 @ 3400 N.
Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Larry Heck (Georgia Institute of Technology) “The AVA Digital Human: Improving Conversational Interactions through Visually Situated Context” URL:https://www.clsp.jhu.edu/events/larry-heck-georgia-institute-of-technology-the-ava-digital-human-improving-conversational-interactions-through-visually-situated-context/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract
\nAdvances in open domain Large Lan guage Models (LLMs) starting with BERT and more recently with GPT-4\, PaLM \, and LLaMA have facilitated dramatic improvements in conversational syst ems. These improvements include an unprecedented breadth of conversational interactions between humans and machines while maintaining and sometimes surpassing the accuracy of systems trained specifically for known\, closed domains. However\, many applications still require higher levels of accur acy than pre-trained LLMs can provide. There are many studies underway to accomplish this. Broadly speaking\, the methods assume the pre-trained mod els are fixed (due to cost/time)\, and instead look to various augmentatio n methods including prompting strategies and model adaptation/fine-tuning.
\nOne augmentation strategy leverages the context of the conversation. For example\, who are the participants and what is known about these individuals (personal context)\, what was just said (dialogue context)\, where is the conversation taking place (geo context)\, what time of day and season is it (time context)\, etc. A powerful form of context is the shared visual setting of the conversation between the human(s) and machine. The shared visual scene may be from a device (phone\, smart glasses) or represented on a screen (browser\, maps\, etc.). The elements in the visual context can be exploited by grounding the natural language conversational interaction\, thereby changing the priors of certain concepts and increasing the accuracy of the system. In this talk\, I will present some of my historical work in this area as well as my recent work in the AI Virtual Assistant (AVA) Lab at Georgia Tech.
\nBio
\nDr. Larry Heck is a Professor with a joint appointment in the School of Electrical and Computer Engineering and the School of Interactive Computing at the Georgia Institute of Technology. He holds the Rhesa S. Farmer Distinguished Chair of Advanced Computing Concepts and is a Georgia Research Alliance Eminent Scholar. He received the BSEE from Texas Tech University (1986)\, and MSEE and PhD EE degrees from the Georgia Institute of Technology (1989\, 1991). He is a Fellow of the IEEE\, was inducted into the Academy of Distinguished Engineering Alumni at Georgia Tech\, and received the Distinguished Engineer Award from the Texas Tech University Whitacre College of Engineering. He was a Senior Research Engineer with SRI (1992-98)\, Vice President of R&D at Nuance (1998-2005)\, Vice President of Search and Advertising Sciences at Yahoo! (2005-2009)\, Chief Scientist of the Microsoft Speech products and Distinguished Engineer in Microsoft Research (2009-2014)\, Principal Scientist with Google Research (2014-2017)\, and CEO of Viv Labs and SVP at Samsung (2017-2021).
\n\n
\n X-TAGS;LANGUAGE=en-US:2023\,April\,Heck END:VEVENT BEGIN:VEVENT UID:ai1ec-23590@www.clsp.jhu.edu DTSTAMP:20240329T080823Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nMachine Translation has the ultimate goal of eliminating language barriers. However\, the area has focused mainly on a few languages\, leaving many low-resource languages without support. In this talk\, I will discuss the challenges of bringing translation support for 200 written languages and beyond.\n\nFirst\, I talk about the No Language Left Behind Project\, where we took on this challenge by first contextualizing the need for low-resource language translation support through exploratory interviews with native speakers. Then\, we created datasets and models aimed at narrowing the performance gap between low- and high-resource languages. We proposed multiple architectural and training improvements to counteract over-fitting while training on thousands of language-pairs/tasks. We evaluated the performance of over 40\,000 different translation directions.\n\nAfterwards\, I’ll discuss the challenges of pushing translation performance beyond text for languages that don’t have written standards\, like Hokkien.\nOur models achieve state-of-the-art performance and lay important groundwork towards realizing a universal translation system. At the same time\, we keep making open-source contributions for everyone to keep advancing the research for the languages they care about.\nBio\nPaco is a Research Scientist Manager supporting translation teams at Meta AI (FAIR). He works in the field of machine translation with a focus on low-resource translation (e.g. NLLB\, FLORES) and the aim of breaking language barriers. He joined Meta in 2016. His research has been published in top-tier NLP venues like ACL and EMNLP. He was co-chair of Research at AMTA (2020-2022).
He has organized several research competitions focused on low-resource translation and data filtering. Paco obtained his PhD from the ITESM in Mexico\, was a visiting scholar at the LTI-CMU from 2008-2009\, and participated in DARPA’s GALE evaluation program. Paco was a post-doc and scientist at the Qatar Computing Research Institute in Qatar in 2012-2016. DTSTART;TZID=America/New_York:20230417T120000 DTEND;TZID=America/New_York:20230417T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Paco Guzman (Meta AI) “Building a Universal Translation System to Break Down Language Barriers” URL:https://www.clsp.jhu.edu/events/paco-guzman-meta-ai/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n
\\nAbstract
\nOur models achieve state-of-the-art performance and lay important groundwork towards realizing a universal translation system. At the same time\, we keep maki ng open-source contributions for everyone to keep advancing the research f or the languages they care about.
\nBio
\nPaco is a Research Scientist Manager supporting translation teams at Meta AI (FAIR). He works in the field of machine translation with a focus on low-resource translation (e.g. NLLB\, FLORES) and the aim of breaking language barriers. He joined Meta in 2016. His research has been published in top-tier NLP venues like ACL and EMNLP. He was co-chair of Research at AMTA (2020-2022). He has organized several research competitions focused on low-resource translation and data filtering. Paco obtained his PhD from the ITESM in Mexico\, was a visiting scholar at the LTI-CMU from 2008-2009\, and participated in DARPA’s GALE evaluation program. Paco was a post-doc and scientist at the Qatar Computing Research Institute in Qatar in 2012-2016.
\n X-TAGS;LANGUAGE=en-US:2023\,April\,Guzman END:VEVENT BEGIN:VEVENT UID:ai1ec-23592@www.clsp.jhu.edu DTSTAMP:20240329T080823Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nLarge language models (LLMs) have ushered in exciting capabilities in language understanding and text generation\, with systems like ChatGPT holding fluent dialogs with users and being almost indisting uishable from humans. While this has obviously raised conversational syste ms and chatbots to a new level\, it also presents exciting new opportuniti es for building artificial agents with improved decision making capabiliti es. Specifically\, the ability to reason with language can allow us to bui ld agents that can 1) execute complex action sequences to effect change in the world\, 2) learn new skills by ‘reading’ in addition to ‘doing’\, and 3) allow for easier personalization and control over their behavior. In t his talk\, I will demonstrate how we can build such language-enabled agent s that exhibit the above traits across various use cases such as multi-hop question answering\, web interaction\, and robotic tool manipulation. In the end\, I will also discuss some dangers of using these LLM-based system s and some challenges that lie ahead in ensuring their safe use.\nBiograph y\nKarthik Narasimhan is an assistant professor in the Computer Science de partment at Princeton University and a co-Director of the Princeton NLP gr oup. His research spans the areas of natural language processing and reinf orcement learning\, with the goal of building intelligent agents that lear n to operate in the world through both their own experience (”doing things ”) and leveraging existing human knowledge (”reading about things”). Karth ik received his PhD from MIT in 2017\, and spent a year as a visiting rese arch scientist at OpenAI contributing to the GPT language model\, prior to joining Princeton in 2018. 
His research has been recognized by an NSF CAREER Award\, a Google Research Scholar Award\, an Amazon research award (2019)\, a Bell Labs runner-up prize\, and outstanding paper awards at EMNLP (2015\, 2016) and NeurIPS (2022). DTSTART;TZID=America/New_York:20230421T120000 DTEND;TZID=America/New_York:20230421T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Karthik Narasimhan (Princeton University) “Towards General-Purpose Language-Enabled Agents: Machines that can Read\, Think and Act” URL:https://www.clsp.jhu.edu/events/karthik-narasimhan-princeton-university/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract
\nLarge language models (LLMs) have ushered in exciting capabilities in language understanding and text generation\, with systems like ChatGPT holding fluent dialogs with users and being almost indisting uishable from humans. While this has obviously raised conversational syste ms and chatbots to a new level\, it also presents exciting new opportuniti es for building artificial agents with improved decision making capabiliti es. Specifically\, the ability to reason with language can allow us to bui ld agents that can 1) execute complex action sequences to effect change in the world\, 2) learn new skills by ‘reading’ in addition to ‘doing’\, and 3) allow for easier personalization and control over their behavior. In t his talk\, I will demonstrate how we can build such language-enabled agent s that exhibit the above traits across various use cases such as multi-hop question answering\, web interaction\, and robotic tool manipulation. In the end\, I will also discuss some dangers of using these LLM-based system s and some challenges that lie ahead in ensuring their safe use.
\nBiography
\nKarthik Narasimhan is an assistant professor in the Computer Science department at Princeton University and a co-Director of the Princeton NLP group. His research spans the areas of natural language processing and reinforcement learning\, with the goal of building intelligent agents that learn to operate in the world through both their own experience (“doing things”) and leveraging existing human knowledge (“reading about things”). Karthik received his PhD from MIT in 2017\, and spent a year as a visiting research scientist at OpenAI contributing to the GPT language model\, prior to joining Princeton in 2018. His research has been recognized by an NSF CAREER Award\, a Google Research Scholar Award\, an Amazon research award (2019)\, a Bell Labs runner-up prize\, and outstanding paper awards at EMNLP (2015\, 2016) and NeurIPS (2022).
\n X-TAGS;LANGUAGE=en-US:2023\,April\,Narasimhan END:VEVENT BEGIN:VEVENT UID:ai1ec-23606@www.clsp.jhu.edu DTSTAMP:20240329T080823Z CATEGORIES;LANGUAGE=en-US:Student Seminars CONTACT: DESCRIPTION: DTSTART;TZID=America/New_York:20230424T120000 DTEND;TZID=America/New_York:20230424T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Student Seminar – Brian Lu URL:https://www.clsp.jhu.edu/events/student-seminar-brian-lu/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2023\,April\,Lu END:VEVENT BEGIN:VEVENT UID:ai1ec-23608@www.clsp.jhu.edu DTSTAMP:20240329T080823Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nAutomated analysis of student writing has the potenti al to provide alternatives to selected-response questions such as multiple choice\, and to enable teachers and instructors to assess students’ reaso ning skills based on their long-form writing. Further\, automated support to assess both short answers and long passages could provide students with a smoother trajectory towards mastery of written communication. Our meth ods focus on the specific ideas students express to support formative asse ssment through different kinds of feedback\, which aims to scaffold their abilities to reason and communicate. In this talk I review our work in the PSU NLP lab on methods for automated assessment of different forms of stu dent writing\, from younger and older students. I will briefly illustrate highly curated datasets created in collaboration with researchers in STEM education\, results from deployment of an older content analysis tool on middle school physics essays\, and very preliminary results on assessment of college students’ physics lab reports. 
I will also present our current work on short answer assessment using a novel recurrent relation network that incorporates contrastive learning.\nBio\nBecky Passonneau has been a Professor in the Department of Computer Science and Engineering at Penn State University since 2016\, when she joined as the first NLP researcher. Since that time the NLP faculty has grown to include Rui Zhang and Wenpeng Yin. Becky’s research in natural language processing addresses computational pragmatics\, meaning the investigation of language as a system of interactive behavior that serves a wide range of purposes. She received her PhD in Linguistics from the University of Chicago in 1985\, and worked at several academic and industry research labs before joining Penn State. Her work is reported in over 140 publications in journals and refereed conference proceedings\, and has been funded through 27 sponsored projects from 16 sources\, including government agencies\, corporate sponsors\, corporate gifts\, and foundations. DTSTART;TZID=America/New_York:20230428T120000 DTEND;TZID=America/New_York:20230428T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Becky Passonneau (Penn State University) “Automated Support to Scaffold Students’ Short- and Long-form STEM Writing” URL:https://www.clsp.jhu.edu/events/becky-passonneau-penn-state-university-automated-support-to-scaffold-students-short-and-long-form-stem-writing/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract
\nAutomated analysis of student writing has the potenti al to provide alternatives to selected-response questions such as multiple choice\, and to enable teachers and instructors to assess students’ reaso ning skills based on their long-form writing. Further\, automated support to assess both short answers and long passages could provide students with a smoother trajectory towards mastery of written communication. Our meth ods focus on the specific ideas students express to support formative asse ssment through different kinds of feedback\, which aims to scaffold their abilities to reason and communicate. In this talk I review our work in the PSU NLP lab on methods for automated assessment of different forms of stu dent writing\, from younger and older students. I will briefly illustrate highly curated datasets created in collaboration with researchers in STEM education\, results from deployment of an older content analysis tool on middle school physics essays\, and very preliminary results on assessment of college students’ physics lab reports. I will also present our current work on short answer assessment using a novel recurrent relation network that incorporates contrastive learning.
\nBio
\nBecky Passonneau has been a Professor in the Department of Computer Science and Engineering at Penn State University since 2016\, when she joined as the first NLP researcher. Since that time the NLP faculty has grown to include Rui Zhang and Wenpeng Yin. Becky’s research in natural language processing addresses computational pragmatics\, meaning the investigation of language as a system of interactive behavior that serves a wide range of purposes. She received her PhD in Linguistics from the University of Chicago in 1985\, and worked at several academic and industry research labs before joining Penn State. Her work is reported in over 140 publications in journals and refereed conference proceedings\, and has been funded through 27 sponsored projects from 16 sources\, including government agencies\, corporate sponsors\, corporate gifts\, and foundations.
\n X-TAGS;LANGUAGE=en-US:2023\,April\,Passonneau END:VEVENT BEGIN:VEVENT UID:ai1ec-24459@www.clsp.jhu.edu DTSTAMP:20240329T080823Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION: DTSTART;TZID=America/New_York:20240301T120000 DTEND;TZID=America/New_York:20240301T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Mohit Iyyer “Improving\, Evaluating and Detecting Long-Form LLM-Generated Text” URL:https://www.clsp.jhu.edu/events/mohit-iyyer-improving-evaluating-and-detecting-long-form-llm-generated-text/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2024\,Iyyer\,March END:VEVENT BEGIN:VEVENT UID:ai1ec-24461@www.clsp.jhu.edu DTSTAMP:20240329T080823Z CATEGORIES;LANGUAGE=en-US:Student Seminars CONTACT: DESCRIPTION:Abstract\nMost machine translation systems operate at the sentence level\, while humans write and translate within a given context. Operating on individual sentences forces error-prone sentence segmentation into the machine translation pipeline. This limits the upper-bound performance of these systems by creating noisy training bitext. Further\, many grammatical features necessitate inter-sentential context in order to translate\, which makes perfect sentence-level machine translation an impossible task. In this talk\, we will cover the inherent limits of sentence-level machine translation. Following this\, we will explore a key obstacle in the way of true context-aware machine translation—an abject lack of data. Finally\, we will cover recent work that provides (1) a new evaluation dataset that specifically addresses the translation of context-dependent discourse phenomena and (2) reconstructed documents from large-scale sentence-level bitext that can be used to improve performance when translating these types of phenomena. DTSTART;TZID=America/New_York:20240304T120000 DTEND;TZID=America/New_York:20240304T131500 LOCATION:Hackerman Hall B17 @ 3400 N.
Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Rachel Wicks (JHU) “To Sentences and Beyond: Paving the Way for Context-Aware Machine Translation” URL:https://www.clsp.jhu.edu/events/rachel-wicks-jhu/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract
\nMost machine translation systems operate at the sentence level\, while humans write and translate within a given context. Operating on individual sentences forces error-prone sentence segmentation into the machine translation pipeline. This limits the upper-bound performance of these systems by creating noisy training bitext. Further\, many grammatical features necessitate inter-sentential context in order to translate\, which makes perfect sentence-level machine translation an impossible task. In this talk\, we will cover the inherent limits of sentence-level machine translation. Following this\, we will explore a key obstacle in the way of true context-aware machine translation—an abject lack of data. Finally\, we will cover recent work that provides (1) a new evaluation dataset that specifically addresses the translation of context-dependent discourse phenomena and (2) reconstructed documents from large-scale sentence-level bitext that can be used to improve performance when translating these types of phenomena.
\n X-TAGS;LANGUAGE=en-US:2024\,March\,Wicks END:VEVENT BEGIN:VEVENT UID:ai1ec-24465@www.clsp.jhu.edu DTSTAMP:20240329T080823Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nLarge Language Models (LLMs) have demonstrated remarkable capabilities across various domains. However\, it is still very challenging to build highly reliable applications with LLMs that support specialized use cases. LLMs trained on web data often excel at capturing general language patterns\, but they can struggle to support specialized domains and personalized user needs. Moreover\, LLMs can produce errors that are deceptively plausible\, making them potentially dangerous for high-trust scenarios. In this talk\, I will discuss some of our recent efforts in addressing these challenges with data-efficient tuning methods and a novel factuality evaluation framework. Specifically\, my talk will focus on building multilingual applications\, one crucial use case often characterized by limited tuning and evaluation data.\nBio\nXinyi (Cindy) Wang is a research scientist at Google DeepMind working on Large Language Models (LLMs) and their application to generative question answering. She has worked on multilingual instruction tuning for Gemini and multilingual generative models used in Google Search. Before Google DeepMind\, Cindy Wang obtained her PhD in Language Technologies at Carnegie Mellon University. During her PhD\, she mainly worked on developing data-efficient natural language processing (NLP) systems. She has made several contributions in data selection\, data representation\, and model adaptation for multilingual NLP. DTSTART;TZID=America/New_York:20240308T120000 DTEND;TZID=America/New_York:20240308T131500 LOCATION:Hackerman Hall B17 @ 3400 N.
Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Cindy Wang (Google DeepMind) “Building Data-Efficient and Reliable Applications with Large Language Models” URL:https://www.clsp.jhu.edu/events/cindy-wang-google-deepmind-building-data-efficient-and-reliable-applications-with-large-language-models/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract
\nLarge Language Models (LLMs) have demonstrated remarkable capabilities across various domains. However\, it is still very challenging to build highly reliable applications with LLMs that support specialized use cases. LLMs trained on web data often excel at capturing general language patterns\, but they can struggle to support specialized domains and personalized user needs. Moreover\, LLMs can produce errors that are deceptively plausible\, making them potentially dangerous for high-trust scenarios. In this talk\, I will discuss some of our recent efforts in addressing these challenges with data-efficient tuning methods and a novel factuality evaluation framework. Specifically\, my talk will focus on building multilingual applications\, one crucial use case often characterized by limited tuning and evaluation data.
\nBio
\nXinyi (Cindy) Wang is a research scientist at Google DeepMind working on Large Language Models (LLMs) and their application to generative question answering. She has worked on multilingual instruction tuning for Gemini and multilingual generative models used in Google Search. Before Google DeepMind\, Cindy Wang obtained her PhD in Language Technologies at Carnegie Mellon University. During her PhD\, she mainly worked on developing data-efficient natural language processing (NLP) systems. She has made several contributions in data selection\, data representation\, and model adaptation for multilingual NLP.
\n X-TAGS;LANGUAGE=en-US:2024\,March\,Wang END:VEVENT BEGIN:VEVENT UID:ai1ec-24479@www.clsp.jhu.edu DTSTAMP:20240329T080823Z CATEGORIES;LANGUAGE=en-US:Student Seminars CONTACT: DESCRIPTION:Abstract\nThe speech field is evolving to solve more challenging scenarios\, such as multi-channel recordings with multiple simultaneous talkers. Given the many types of microphone setups in use\, we present the UniX-Encoder\, a universal encoder designed for multiple tasks that works with any microphone array\, in both single- and multi-talker environments. Our research advances previous multi-channel speech processing efforts in four key areas: 1) Adaptability: In contrast to traditional models constrained to certain microphone array configurations\, our encoder is universally compatible. 2) Multi-task capability: Beyond the single-task focus of previous systems\, UniX-Encoder acts as a robust upstream model\, adeptly extracting features for diverse tasks including ASR and speaker recognition. 3) Self-supervised training: The encoder is trained without requiring labeled multi-channel data. 4) End-to-end integration: In contrast to models that first beamform and then process single channels\, our encoder offers an end-to-end solution\, bypassing explicit beamforming or separation. To validate its effectiveness\, we tested the UniX-Encoder on a synthetic multi-channel dataset derived from the LibriSpeech corpus. Across tasks like speech recognition and speaker diarization\, our encoder consistently outperformed combinations like the WavLM model with the BeamformIt frontend.
DTSTART;TZID=America/New_York:20240311T200500 DTEND;TZID=America/New_York:20240311T210500 SEQUENCE:0 SUMMARY:Zili Huang (JHU) “UniX-Encoder: A Universal X-Channel Speech Encoder for Ad-Hoc Microphone Array Speech Processing” URL:https://www.clsp.jhu.edu/events/zili-huang-jhu-unix-encoder-a-universal-x-channel-speech-encoder-for-ad-hoc-microphone-array-speech-processing/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract
\nThe speech field is evolving to solve more challenging scenarios\, such as multi-channel recordings with multiple simultaneous talkers. Given the many types of microphone setups in use\, we present the UniX-Encoder\, a universal encoder designed for multiple tasks that works with any microphone array\, in both single- and multi-talker environments. Our research advances previous multi-channel speech processing efforts in four key areas: 1) Adaptability: In contrast to traditional models constrained to certain microphone array configurations\, our encoder is universally compatible. 2) Multi-task capability: Beyond the single-task focus of previous systems\, UniX-Encoder acts as a robust upstream model\, adeptly extracting features for diverse tasks including ASR and speaker recognition. 3) Self-supervised training: The encoder is trained without requiring labeled multi-channel data. 4) End-to-end integration: In contrast to models that first beamform and then process single channels\, our encoder offers an end-to-end solution\, bypassing explicit beamforming or separation. To validate its effectiveness\, we tested the UniX-Encoder on a synthetic multi-channel dataset derived from the LibriSpeech corpus. Across tasks like speech recognition and speaker diarization\, our encoder consistently outperformed combinations like the WavLM model with the BeamformIt frontend.
\n X-TAGS;LANGUAGE=en-US:2024\,Huang\,March END:VEVENT BEGIN:VEVENT UID:ai1ec-24481@www.clsp.jhu.edu DTSTAMP:20240329T080823Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nNatural language provides an intuitive and powerful interface to access knowledge at scale. Modern language systems draw information from two rich knowledge sources: (1) information stored in their parameters during massive pretraining and (2) documents retrieved at inference time. Yet\, we are far from building systems that can reliably provide information from such knowledge sources. In this talk\, I will discuss paths toward more robust systems. In the first part of the talk\, I will present a module for scaling retrieval-based knowledge augmentation. We learn a compressor that maps retrieved documents into textual summaries prior to in-context integration. This not only reduces computational costs but also filters irrelevant or incorrect information. In the second half of the talk\, I will discuss the challenges of updating knowledge stored in model parameters and propose a method to prevent models from reciting outdated information by identifying facts that are prone to rapid change. I will conclude my talk by proposing an interactive system that can elicit information from users when needed.\nBiography\nEunsol Choi is an assistant professor in the Computer Science department at the University of Texas at Austin. Prior to UT\, she spent a year at Google AI as a visiting researcher. Her research area spans natural language processing and machine learning. She is particularly interested in interpreting and reasoning about text in a dynamic real-world context. She is a recipient of a Facebook research fellowship\, a Google faculty research award\, a Sony faculty award\, and an outstanding paper award at EMNLP. She received a Ph.D.
in computer science and engineering from the University of Washington and a B.A. in mathematics and computer science from Cornell University. DTSTART;TZID=America/New_York:20240315T120000 DTEND;TZID=America/New_York:20240315T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Eunsol Choi (University of Texas at Austin) “Knowledge-Rich Language Systems in a Dynamic World” URL:https://www.clsp.jhu.edu/events/eunsol-choi-university-of-texas-at-austin-knowledge-rich-language-systems-in-a-dynamic-world/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract
\nNatural language provides an intuitive and powerful interface to access knowledge at scale. Modern language systems draw information from two rich knowledge sources: (1) information stored in their parameters during massive pretraining and (2) documents retrieved at inference time. Yet\, we are far from building systems that can reliably provide information from such knowledge sources. In this talk\, I will discuss paths toward more robust systems. In the first part of the talk\, I will present a module for scaling retrieval-based knowledge augmentation. We learn a compressor that maps retrieved documents into textual summaries prior to in-context integration. This not only reduces computational costs but also filters irrelevant or incorrect information. In the second half of the talk\, I will discuss the challenges of updating knowledge stored in model parameters and propose a method to prevent models from reciting outdated information by identifying facts that are prone to rapid change. I will conclude my talk by proposing an interactive system that can elicit information from users when needed.
\nBiography
\nEunsol Choi is an assistant professor in the Computer Science department at the University of Texas at Austin. Prior to UT\, she spent a year at Google AI as a visiting researcher. Her research area spans natural language processing and machine learning. She is particularly interested in interpreting and reasoning about text in a dynamic real-world context. She is a recipient of a Facebook research fellowship\, a Google faculty research award\, a Sony faculty award\, and an outstanding paper award at EMNLP. She received a Ph.D. in computer science and engineering from the University of Washington and a B.A. in mathematics and computer science from Cornell University.
\n\n X-TAGS;LANGUAGE=en-US:2024\,Choi\,March END:VEVENT BEGIN:VEVENT UID:ai1ec-24489@www.clsp.jhu.edu DTSTAMP:20240329T080823Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nOver the past decade\, the field of speech generation has seen significant progress in enhancing speech quality and naturalness. Despite these advancements\, persistent challenges such as speech noise\, limited availability of high-quality data\, and the lack of robustness in speech generation systems remain. Additionally\, the evaluation of speech presents a significant obstacle for comprehensive assessment at scale. Concurrently\, recent breakthroughs in Large Language Models (LLMs) have revolutionized text generation and natural language processing. However\, the complexity of spoken language introduces unique hurdles\, including managing long speech waveform sequences. In this presentation\, I will explore recent innovations in speech synthesis with spoken language modeling\, evaluation for generative speech systems\, and high-fidelity speech enhancement. Finally\, I will discuss prospective avenues for future research aimed at addressing these challenges.\nBio\nSoumi Maiti is a postdoctoral researcher at the Language Technologies Institute\, Carnegie Mellon University\, where she works on speech and language processing. Her research broadly focuses on building intelligent systems that can communicate with humans naturally. She earned a Ph.D. from the Graduate Center\, City University of New York (CUNY) with the Graduate Center Fellowship\, advised by Prof. Michael Mandel. She earned her B.Tech. in Computer Science from the Indian Institute of Engineering Science and Technology\, Shibpur. Previously\, she worked on the Text-To-Speech team at Apple. She has also worked at Google and Interactions LLC as a student researcher and research intern.
She has worked as an adjunct lecturer at Brooklyn College\, CUNY\, for three years and served as a Math Fellow at Hunter College. She has served as a session chair at ICASSP 2024\, ICASSP 2023\, SLT 2023\, and others\, and as an area chair at EMNLP 2023.\n DTSTART;TZID=America/New_York:20240329T120000 DTEND;TZID=America/New_York:20240329T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Soumi Maiti (CMU) “Towards Robust Speech Generation” URL:https://www.clsp.jhu.edu/events/soumi-maiti/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n
\\nAbstract
\nOver the past decade\, the field of speech generation has seen significant progress in enhancing speech quality and naturalness. Despite these advancements\, persistent challenges such as speech noise\, limited availability of high-quality data\, and the lack of robustness in speech generation systems remain. Additionally\, the evaluation of speech presents a significant obstacle for comprehensive assessment at scale. Concurrently\, recent breakthroughs in Large Language Models (LLMs) have revolutionized text generation and natural language processing. However\, the complexity of spoken language introduces unique hurdles\, including managing long speech waveform sequences. In this presentation\, I will explore recent innovations in speech synthesis with spoken language modeling\, evaluation for generative speech systems\, and high-fidelity speech enhancement. Finally\, I will discuss prospective avenues for future research aimed at addressing these challenges.
\nBio
\nSoumi Maiti is a postdoctoral researcher at the Language Technologies Institute\, Carnegie Mellon University\, where she works on speech and language processing. Her research broadly focuses on building intelligent systems that can communicate with humans naturally. She earned a Ph.D. from the Graduate Center\, City University of New York (CUNY) with the Graduate Center Fellowship\, advised by Prof. Michael Mandel. She earned her B.Tech. in Computer Science from the Indian Institute of Engineering Science and Technology\, Shibpur. Previously\, she worked on the Text-To-Speech team at Apple. She has also worked at Google and Interactions LLC as a student researcher and research intern. She has worked as an adjunct lecturer at Brooklyn College\, CUNY\, for three years and served as a Math Fellow at Hunter College. She has served as a session chair at ICASSP 2024\, ICASSP 2023\, SLT 2023\, and others\, and as an area chair at EMNLP 2023.
\n\n X-TAGS;LANGUAGE=en-US:2024\,Maiti\,March END:VEVENT BEGIN:VEVENT UID:ai1ec-24491@www.clsp.jhu.edu DTSTAMP:20240329T080823Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION: DTSTART;TZID=America/New_York:20240401T120000 DTEND;TZID=America/New_York:20240401T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Yuan Gong URL:https://www.clsp.jhu.edu/events/yuan-gong/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2024\,April\,Gong END:VEVENT BEGIN:VEVENT UID:ai1ec-24507@www.clsp.jhu.edu DTSTAMP:20240329T080823Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract\nHistory repeats itself\, sometimes in a bad way. Preventing natural or man-made disasters requires being aware of these patterns and taking pre-emptive action to address and reduce them\, or ideally\, eliminate them. Emerging events\, such as the COVID pandemic and the Ukraine crisis\, require a time-sensitive\, comprehensive understanding of the situation to allow for appropriate decision-making and effective action response. Automated generation of situation reports can significantly reduce the time\, effort\, and cost for domain experts when preparing their official human-curated reports. However\, AI research toward this goal has been very limited\, and no successful trials have yet been conducted to automate such report generation and “what-if” disaster forecasting. Pre-existing natural language processing and information retrieval techniques are insufficient to identify\, locate\, and summarize important information\, and lack detailed\, structured\, and strategic awareness.
In this talk I will present SmartBook\, a novel framework addressing a task that cannot be solved by large language models alone\, which consumes large volumes of multimodal\, multilingual news data and produces a structured situation report with multiple hypotheses (claims) summarized and grounded with rich links to factual evidence through multimodal knowledge extraction\, claim detection\, fact checking\, misinformation detection\, and factual error correction. Furthermore\, SmartBook can also serve as a novel news event simulator\, or an intelligent prophetess. Given “what-if” conditions and dimensions elicited from a domain expert user concerning a disaster scenario\, SmartBook will induce schemas from historical events and automatically generate a complex event graph along with a timeline of news articles that describe new simulated events and character-centric stories\, based on a new Λ-shaped attention mask that can generate text with infinite length. By effectively simulating disaster scenarios in both event graph and natural language format\, we expect SmartBook will greatly assist humanitarian workers and policymakers in exercising reality checks\, and thus better preventing and responding to future disasters.\nBio\nHeng Ji is a professor in the Computer Science Department\, and an affiliated faculty member of the Electrical and Computer Engineering Department and the Coordinated Science Laboratory of the University of Illinois Urbana-Champaign. She is an Amazon Scholar. She is the Founding Director of the Amazon-Illinois Center on AI for Interactive Conversational Experiences (AICE). She received her B.A. and M.A. in Computational Linguistics from Tsinghua University\, and her M.S. and Ph.D. in Computer Science from New York University. Her research interests focus on natural language processing\, especially multimedia multilingual information extraction\, knowledge-enhanced large language models\, knowledge-driven generation\, and conversational AI.
She was selected as a Young Scientist to attend the 6th World Laureates Association Forum\, and selected to participate in DARPA AI Forward in 2023. She was selected as a “Young Scientist” and a member of the Global Future Council on the Future of Computing by the World Economic Forum in 2016 and 2017. The awards she has received include Women Leaders of Conversational AI (Class of 2023) by Project Voice\, the “AI’s 10 to Watch” Award by IEEE Intelligent Systems in 2013\, an NSF CAREER award in 2009\, the PACLIC 2012 Best Paper runner-up\, the “Best of ICDM 2013” paper award\, the “Best of SDM 2013” paper award\, an ACL 2018 Best Demo Paper nomination\, the ACL 2020 Best Demo Paper Award\, the NAACL 2021 Best Demo Paper Award\, Google Research Awards in 2009 and 2014\, IBM Watson Faculty Awards in 2012 and 2014\, and Bosch Research Awards in 2014-2018. She was invited to testify to the U.S. House Cybersecurity\, Data Analytics\, & IT Committee as an AI expert in 2023. She was invited by the Secretary of the U.S. Air Force and AFRL to join the Air Force Data Analytics Expert Panel to inform the Air Force Strategy 2030\, and invited to speak at the Federal Information Integrity R&D Interagency Working Group (IIRD IWG) briefing in 2023. She is the lead of many multi-institution projects and tasks\, including the U.S. ARL projects on information fusion and knowledge network construction\, the DARPA ECOLE MIRACLE team\, the DARPA KAIROS RESIN team\, and the DARPA DEFT Tinker Bell team. She coordinated the NIST TAC Knowledge Base Population task from 2010 to 2022. She was an associate editor for IEEE/ACM Transactions on Audio\, Speech\, and Language Processing\, and served as Program Committee Co-Chair of many conferences including NAACL-HLT 2018 and AACL-IJCNLP 2022. She was elected secretary of the North American Chapter of the Association for Computational Linguistics (NAACL) for 2020-2023. Her research has been widely supported by U.S.
government agencies (DARPA\, NSF\, DoE\, ARL\, IARPA\, AFRL\, DHS) and industry (Apple\, Amazon\, Google\, Facebook\, Bosch\, IBM\, Disney). DTSTART;TZID=America/New_York:20240405T120000 DTEND;TZID=America/New_York:20240405T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, Maryland 21218 SEQUENCE:0 SUMMARY:Heng Ji (University of Illinois Urbana-Champaign) “SmartBook: an AI Prophetess for Disaster Reporting and Forecasting” URL:https://www.clsp.jhu.edu/events/heng-ji-university-of-illinois-urbana-champaign-smartbook-an-ai-prophetess-for-disaster-reporting-and-forecasting/ X-COST-TYPE:free X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\n
Abstract
\nHistory repeats itself\, sometimes in a bad way. Preventing natural or man-made disasters requires being aware of these patterns and taking pre-emptive action to address and reduce them\, or ideally\, eliminate them. Emerging events\, such as the COVID pandemic and the Ukraine crisis\, require a time-sensitive\, comprehensive understanding of the situation to allow for appropriate decision-making and effective action response. Automated generation of situation reports can significantly reduce the time\, effort\, and cost for domain experts when preparing their official human-curated reports. However\, AI research toward this goal has been very limited\, and no successful trials have yet been conducted to automate such report generation and “what-if” disaster forecasting. Pre-existing natural language processing and information retrieval techniques are insufficient to identify\, locate\, and summarize important information\, and lack detailed\, structured\, and strategic awareness. In this talk I will present SmartBook\, a novel framework addressing a task that cannot be solved by large language models alone\, which consumes large volumes of multimodal\, multilingual news data and produces a structured situation report with multiple hypotheses (claims) summarized and grounded with rich links to factual evidence through multimodal knowledge extraction\, claim detection\, fact checking\, misinformation detection\, and factual error correction. Furthermore\, SmartBook can also serve as a novel news event simulator\, or an intelligent prophetess. Given “what-if” conditions and dimensions elicited from a domain expert user concerning a disaster scenario\, SmartBook will induce schemas from historical events and automatically generate a complex event graph along with a timeline of news articles that describe new simulated events and character-centric stories\, based on a new Λ-shaped attention mask that can generate text with infinite length.
By effectively simulating disaster scenarios in both event graph and natural language format\, we expect SmartBook will greatly assist humanitarian workers and policymakers in exercising reality checks\, and thus better preventing and responding to future disasters.
\nBio
\nHeng Ji is a professor in the Computer Science Department\, and an affiliated faculty member of the Electrical and Computer Engineering Department and the Coordinated Science Laboratory of the University of Illinois Urbana-Champaign. She is an Amazon Scholar. She is the Founding Director of the Amazon-Illinois Center on AI for Interactive Conversational Experiences (AICE). She received her B.A. and M.A. in Computational Linguistics from Tsinghua University\, and her M.S. and Ph.D. in Computer Science from New York University. Her research interests focus on natural language processing\, especially multimedia multilingual information extraction\, knowledge-enhanced large language models\, knowledge-driven generation\, and conversational AI. She was selected as a Young Scientist to attend the 6th World Laureates Association Forum\, and selected to participate in DARPA AI Forward in 2023. She was selected as a “Young Scientist” and a member of the Global Future Council on the Future of Computing by the World Economic Forum in 2016 and 2017. The awards she has received include Women Leaders of Conversational AI (Class of 2023) by Project Voice\, the “AI’s 10 to Watch” Award by IEEE Intelligent Systems in 2013\, an NSF CAREER award in 2009\, the PACLIC 2012 Best Paper runner-up\, the “Best of ICDM 2013” paper award\, the “Best of SDM 2013” paper award\, an ACL 2018 Best Demo Paper nomination\, the ACL 2020 Best Demo Paper Award\, the NAACL 2021 Best Demo Paper Award\, Google Research Awards in 2009 and 2014\, IBM Watson Faculty Awards in 2012 and 2014\, and Bosch Research Awards in 2014-2018. She was invited to testify to the U.S. House Cybersecurity\, Data Analytics\, & IT Committee as an AI expert in 2023. She was invited by the Secretary of the U.S. Air Force and AFRL to join the Air Force Data Analytics Expert Panel to inform the Air Force Strategy 2030\, and invited to speak at the Federal Information Integrity R&D Interagency Working Group (IIRD IWG) briefing in 2023.
She is the lead of many multi-institution projects and tasks\, including the U.S. ARL projects on information fusion and knowledge network construction\, the DARPA ECOLE MIRACLE team\, the DARPA KAIROS RESIN team\, and the DARPA DEFT Tinker Bell team. She coordinated the NIST TAC Knowledge Base Population task from 2010 to 2022. She was an associate editor for IEEE/ACM Transactions on Audio\, Speech\, and Language Processing\, and served as Program Committee Co-Chair of many conferences including NAACL-HLT 2018 and AACL-IJCNLP 2022. She was elected secretary of the North American Chapter of the Association for Computational Linguistics (NAACL) for 2020-2023. Her research has been widely supported by U.S. government agencies (DARPA\, NSF\, DoE\, ARL\, IARPA\, AFRL\, DHS) and industry (Apple\, Amazon\, Google\, Facebook\, Bosch\, IBM\, Disney).
\n X-TAGS;LANGUAGE=en-US:2024\,April\,Ji END:VEVENT BEGIN:VEVENT UID:ai1ec-24509@www.clsp.jhu.edu DTSTAMP:20240329T080823Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION: DTSTART;TZID=America/New_York:20240408T120000 DTEND;TZID=America/New_York:20240408T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Berrak Sisman URL:https://www.clsp.jhu.edu/events/berrak-sisman/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2024\,April\,Sisman END:VEVENT BEGIN:VEVENT UID:ai1ec-24511@www.clsp.jhu.edu DTSTAMP:20240329T080823Z CATEGORIES;LANGUAGE=en-US:Student Seminars CONTACT: DESCRIPTION: DTSTART;TZID=America/New_York:20240412T120000 DTEND;TZID=America/New_York:20240412T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Sonal Joshi (JHU) URL:https://www.clsp.jhu.edu/events/sonal-joshi-jhu/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2024\,April\,Joshi END:VEVENT BEGIN:VEVENT UID:ai1ec-24515@www.clsp.jhu.edu DTSTAMP:20240329T080823Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION: DTSTART;TZID=America/New_York:20240415T120000 DTEND;TZID=America/New_York:20240415T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Matthew Wipperman (Regeneron) URL:https://www.clsp.jhu.edu/events/matthew-wipperman-regeneron/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2024\,April\,Wipperman END:VEVENT END:VCALENDAR