BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-21277@www.clsp.jhu.edu
DTSTAMP:20240329T051716Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nAs humans\, our understanding of language is grounded in a rich mental model about “how the world works” – that we learn through perception and interaction. We use this understanding to reason beyond what we literally observe or read\, imagining how situations might unfold in the world. Machines today struggle at this kind of reasoning\, which limits how they can communicate with humans.\nIn my talk\, I will discuss three lines of work to bridge this gap between machines and humans. I will first discuss how we might measure grounded understanding. I will introduce a suite of approaches for constructing benchmarks\, using machines in the loop to filter out spurious biases. Next\, I will introduce PIGLeT: a model that learns physical commonsense understanding by interacting with the world through simulation\, using this knowledge to ground language. From an English-language description of an event\, PIGLeT can anticipate how the world state might change – outperforming text-only models that are orders of magnitude larger. Finally\, I will introduce MERLOT\, which learns about situations in the world by watching millions of YouTube videos with transcribed speech. Through training objectives inspired by the developmental psychology idea of multimodal reentry\, MERLOT learns to fuse language\, vision\, and sound together into powerful representations.\nTogether\, these directions suggest a path forward for building machines that learn language rooted in the world.\nBiography\nRowan Zellers is a final year PhD candidate at the University of Washington in Computer Science & Engineering\, advised by Yejin Choi and Ali Farhadi. His research focuses on enabling machines to understand language\, vision\, sound\, and the world beyond these modalities. He has been recognized through an NSF Graduate Fellowship and a NeurIPS 2021 outstanding paper award. His work has appeared in several media outlets\, including Wired\, the Washington Post\, and the New York Times. In the past\, he graduated from Harvey Mudd College with a B.S. in Computer Science & Mathematics\, and has interned at the Allen Institute for AI.
DTSTART;TZID=America/New_York:20220214T120000
DTEND;TZID=America/New_York:20220214T131500
LOCATION:Ames Hall 234 - Presented Virtually Via Zoom https://wse.zoom.us/j/96735183473 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Rowan Zellers (University of Washington) “Grounding Language by Seeing\, Hearing\, and Interacting”
URL:https://www.clsp.jhu.edu/events/rowan-zellers-university-of-washington-grounding-language-by-seeing-hearing-and-interacting/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:Abstract\nAs humans\, our understanding of language is grounded in a rich mental model about “how the world works” – that we learn through perception and interaction. We use this understanding to reason beyond what we literally observe or read\, imagining how situations might unfold in the world. Machines today struggle at this kind of reasoning\, which limits how they can communicate with humans.\nIn my talk\, I will discuss three lines of work to bridge this gap between machines and humans. I will first discuss how we might measure grounded understanding. I will introduce a suite of approaches for constructing benchmarks\, using machines in the loop to filter out spurious biases. Next\, I will introduce PIGLeT: a model that learns physical commonsense understanding by interacting with the world through simulation\, using this knowledge to ground language. From an English-language description of an event\, PIGLeT can anticipate how the world state might change – outperforming text-only models that are orders of magnitude larger. Finally\, I will introduce MERLOT\, which learns about situations in the world by watching millions of YouTube videos with transcribed speech. Through training objectives inspired by the developmental psychology idea of multimodal reentry\, MERLOT learns to fuse language\, vision\, and sound together into powerful representations.\nTogether\, these directions suggest a path forward for building machines that learn language rooted in the world.\nBiography\nRowan Zellers is a final year PhD candidate at the University of Washington in Computer Science & Engineering\, advised by Yejin Choi and Ali Farhadi. His research focuses on enabling machines to understand language\, vision\, sound\, and the world beyond these modalities. He has been recognized through an NSF Graduate Fellowship and a NeurIPS 2021 outstanding paper award. His work has appeared in several media outlets\, including Wired\, the Washington Post\, and the New York Times. In the past\, he graduated from Harvey Mudd College with a B.S. in Computer Science & Mathematics\, and has interned at the Allen Institute for AI.
X-TAGS;LANGUAGE=en-US:2022\,February\,Zellers
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-21487@www.clsp.jhu.edu
DTSTAMP:20240329T051716Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nEnormous amounts of ever-changing knowledge are available online in diverse textual styles and diverse formats. Recent advances in deep learning algorithms and large-scale datasets are spurring progress in many Natural Language Processing (NLP) tasks\, including question answering. Nevertheless\, these models cannot scale up when task-annotated training data are scarce. This talk presents my lab’s work toward building general-purpose models in NLP and how to systematically evaluate them. First\, I present a general model for two known tasks of question answering in English and multiple languages that are robust to small domain shifts. Then\, I show a meta-training approach that can solve a variety of NLP tasks using only a few examples\, and introduce a benchmark to evaluate cross-task generalization. Finally\, I discuss neuro-symbolic approaches to address more complex tasks by eliciting knowledge from structured data and language models.\n\nBiography\n\nHanna Hajishirzi is an Assistant Professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington and a Senior Research Manager at the Allen Institute for AI. Her research spans different areas in NLP and AI\, focusing on developing general-purpose machine learning algorithms that can solve many NLP tasks. Applications for these algorithms include question answering\, representation learning\, green AI\, knowledge extraction\, and conversational dialogue. Honors include the NSF CAREER Award\, Sloan Fellowship\, Allen Distinguished Investigator Award\, Intel rising star award\, best paper and honorable mention awards\, and several industry research faculty awards. Hanna received her PhD from the University of Illinois and spent a year as a postdoc at Disney Research and CMU.
DTSTART;TZID=America/New_York:20220225T120000
DTEND;TZID=America/New_York:20220225T131500
LOCATION:Ames Hall 234 - Presented Virtually Via Zoom https://wse.zoom.us/j/96735183473
SEQUENCE:0
SUMMARY:Hanna Hajishirzi (University of Washington & Allen Institute for AI) “Toward Robust\, Knowledge-Rich NLP”
URL:https://www.clsp.jhu.edu/events/hanna-hajishirzi-university-of-washington-allen-institute-for-ai-toward-robust-knowledge-rich-nlp/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:Abstract
Abstract
Voice conversion (VC) is a significant aspect of artificial intelligence. It is the study of how to convert one’s voice to sound like that of another without changing the linguistic content. Voice conversion belongs to a general technical field of speech synthesis\, which converts text to speech or changes the properties of speech\, for example\, voice identity\, emotion\, and accents. Voice conversion involves multiple speech processing techniques\, such as speech analysis\, spectral conversion\, prosody conversion\, speaker characterization\, and vocoding. With the recent advances in theory and practice\, we are now able to produce human-like voice quality with high speaker similarity. In this talk\, Dr. Sisman will present the recent advances in voice conversion and discuss their promise and limitations. Dr. Sisman will also provide a summary of the available resources for expressive voice conversion research.
Dr. Berrak Sisman (Member\, IEEE) received the Ph.D. degree in electrical and computer engineering from the National University of Singapore in 2020\, fully funded by A*STAR Graduate Academy under the Singapore International Graduate Award (SINGA). She is currently working as a tenure-track Assistant Professor at the Erik Jonsson School Department of Electrical and Computer Engineering at the University of Texas at Dallas\, United States. Prior to joining UT Dallas\, she was a faculty member at Singapore University of Technology and Design (2020-2022). She was a Postdoctoral Research Fellow at the National University of Singapore (2019-2020). She was an exchange doctoral student at the University of Edinburgh and a visiting scholar at The Centre for Speech Technology Research (CSTR)\, University of Edinburgh (2019). She was a visiting researcher at the RIKEN Advanced Intelligence Project in Japan (2018). Her research is focused on machine learning\, signal processing\, emotion\, speech synthesis and voice conversion.
Dr. Sisman has served as the Area Chair at INTERSPEECH 2021\, INTERSPEECH 2022\, and IEEE SLT 2022\, and as the Publication Chair at ICASSP 2022. She has been elected as a member of the IEEE Speech and Language Processing Technical Committee (SLTC) in the area of Speech Synthesis for the term from January 2022 to December 2024. She plays leadership roles in conference organizations and is active in technical committees. She has served as the General Coordinator of the Student Advisory Committee (SAC) of the International Speech Communication Association (ISCA).
X-TAGS;LANGUAGE=en-US:2022\,November\,Sisman
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-24157@www.clsp.jhu.edu
DTSTAMP:20240329T051716Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nIn this talk\, I will present a simple extension of image-based Masked Autoencoders (MAE) to self-supervised representation learning from audio spectrograms. Following the Transformer encoder-decoder design in MAE\, our Audio-MAE first encodes audio spectrogram patches with a high masking ratio\, feeding only the non-masked tokens through encoder layers. The decoder then re-orders and decodes the encoded context padded with mask tokens\, in order to reconstruct the input spectrogram. We find it beneficial to incorporate local window attention in the decoder\, as audio spectrograms are highly correlated in local time and frequency bands. We then fine-tune the encoder with a lower masking ratio on target datasets. Empirically\, Audio-MAE sets new state-of-the-art performance on six audio and speech classification tasks\, outperforming other recent models that use external supervised pre-training.\nBio\nFlorian Metze is a Research Scientist Manager at Meta AI in New York\, supporting a team of researchers and engineers working on multi-modal (image\, video\, audio\, text) content understanding for Meta’s Family of Apps (Instagram\, Threads\, Facebook\, WhatsApp). He used to be an Associate Research Professor at Carnegie Mellon University\, in the School of Computer Science’s Language Technologies Institute\, where he still is an Adjunct Professor. He is also a co-founder of Abridge\, a company working on extracting information from doctor patient conversations. His work covers many areas of speech recognition and multi-media analysis with a focus on end-to-end deep learning. Currently\, he focuses on multi-modal processing of videos\, and using that information to recommend unconnected content. In the past\, he has worked on low resource and multi-lingual speech processing\, speech recognition with articulatory features\, large-scale multi-media retrieval and summarization\, information extraction from medical interviews\, and recognition of personality or similar meta-data from speech.\nFor more information\, please see http://www.cs.cmu.edu/directory/fmetze\n
DTSTART;TZID=America/New_York:20231110T120000
DTEND;TZID=America/New_York:20231110T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Florian Metze (CMU) “Masked Autoencoders that Listen”
URL:https://www.clsp.jhu.edu/events/florian-metze-cmu/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:Abstract
In this talk\, I will present a simple extension of image-based Masked Autoencoders (MAE) to self-supervised representation learning from audio spectrograms. Following the Transformer encoder-decoder design in MAE\, our Audio-MAE first encodes audio spectrogram patches with a high masking ratio\, feeding only the non-masked tokens through encoder layers. The decoder then re-orders and decodes the encoded context padded with mask tokens\, in order to reconstruct the input spectrogram. We find it beneficial to incorporate local window attention in the decoder\, as audio spectrograms are highly correlated in local time and frequency bands. We then fine-tune the encoder with a lower masking ratio on target datasets. Empirically\, Audio-MAE sets new state-of-the-art performance on six audio and speech classification tasks\, outperforming other recent models that use external supervised pre-training.
Bio
Florian Metze is a Research Scientist Manager at Meta AI in New York\, supporting a team of researchers and engineers working on multi-modal (image\, video\, audio\, text) content understanding for Meta’s Family of Apps (Instagram\, Threads\, Facebook\, WhatsApp). He used to be an Associate Research Professor at Carnegie Mellon University\, in the School of Computer Science’s Language Technologies Institute\, where he still is an Adjunct Professor. He is also a co-founder of Abridge\, a company working on extracting information from doctor patient conversations. His work covers many areas of speech recognition and multi-media analysis with a focus on end-to-end deep learning. Currently\, he focuses on multi-modal processing of videos\, and using that information to recommend unconnected content. In the past\, he has worked on low resource and multi-lingual speech processing\, speech recognition with articulatory features\, large-scale multi-media retrieval and summarization\, information extraction from medical interviews\, and recognition of personality or similar meta-data from speech.
For more information\, please see http://www.cs.cmu.edu/directory/fmetze
X-TAGS;LANGUAGE=en-US:2023\,Metze\,November
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-24509@www.clsp.jhu.edu
DTSTAMP:20240329T051716Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:
DTSTART;TZID=America/New_York:20240408T120000
DTEND;TZID=America/New_York:20240408T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Berrak Sisman
URL:https://www.clsp.jhu.edu/events/berrak-sisman/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2024\,April\,Sisman
END:VEVENT
END:VCALENDAR